From 7014486cf39abf3569cce450f8f6d2b5df052d09 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 29 Jun 2023 10:57:52 -0400 Subject: [PATCH 001/101] ci(release): update version to 3.dev4 --- CITATION.cff | 2 +- README.md | 18 ++++++++++-------- code.json | 4 ++-- docs/PyPI_release.md | 16 +++++++++------- flopy/DISCLAIMER.md | 14 ++++++++------ flopy/version.py | 4 ++-- version.txt | 2 +- 7 files changed, 33 insertions(+), 27 deletions(-) diff --git a/CITATION.cff b/CITATION.cff index a6dc9b9928..2699a2891e 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -3,7 +3,7 @@ message: If you use this software, please cite both the article from preferred-c and the software itself. type: software title: FloPy -version: 3.4.1 +version: 3.dev4 date-released: '2023-06-29' doi: 10.5066/F7BK19FH abstract: A Python package to create, run, and post-process MODFLOW-based models. diff --git a/README.md b/README.md index fda5f75b1c..5c79c5fa2a 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@ flopy3 -### Version 3.4.1 +### Version 3.dev4 (preliminary) [![flopy continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) [![Read the Docs](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml) @@ -142,7 +142,7 @@ How to Cite ##### ***Software/Code citation for FloPy:*** -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.1: U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.dev4 (preliminary): U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Additional FloPy Related Publications @@ -167,10 +167,12 @@ MODFLOW Resources Disclaimer ---------- -This software is provided "as is" and "as-available", and makes no -representations or warranties of any kind concerning the software, whether -express, implied, statutory, or other. This includes, without limitation, -warranties of title, merchantability, fitness for a particular purpose, -non-infringement, absence of latent or other defects, accuracy, or the -presence or absence of errors, whether or not known or discoverable. +This software is preliminary or provisional and is subject to revision. It is +being provided to meet the need for timely best science. This software is +provided "as is" and "as-available", and makes no representations or warranties +of any kind concerning the software, whether express, implied, statutory, or +other. This includes, without limitation, warranties of title, +merchantability, fitness for a particular purpose, non-infringement, absence +of latent or other defects, accuracy, or the presence or absence of errors, +whether or not known or discoverable. 
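For context, the version string bumped above is what an installed copy reports at runtime: flopy/version.py (updated later in this same patch) is the single source of truth and is re-exported at the package level. A minimal check, assuming this 3.dev4 build is the one installed:

>>> import flopy
>>> flopy.__version__
'3.dev4'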
diff --git a/code.json b/code.json index 0f84433995..94f89d2786 100644 --- a/code.json +++ b/code.json @@ -1,6 +1,6 @@ [ { - "status": "Release", + "status": "Preliminary", "languages": [ "python" ], @@ -29,7 +29,7 @@ "downloadURL": "https://code.usgs.gov/usgs/modflow/flopy/archive/master.zip", "vcs": "git", "laborHours": -1, - "version": "3.4.1", + "version": "3.dev4", "date": { "metadataLastUpdated": "2023-06-29" }, diff --git a/docs/PyPI_release.md b/docs/PyPI_release.md index 521e29b464..7a59879422 100644 --- a/docs/PyPI_release.md +++ b/docs/PyPI_release.md @@ -30,16 +30,18 @@ How to Cite *Software/Code citation for FloPy:* -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.1: U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.dev4 (preliminary): U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Disclaimer ---------- -This software is provided "as is" and "as-available", and makes no -representations or warranties of any kind concerning the software, whether -express, implied, statutory, or other. This includes, without limitation, -warranties of title, merchantability, fitness for a particular purpose, -non-infringement, absence of latent or other defects, accuracy, or the -presence or absence of errors, whether or not known or discoverable. +This software is preliminary or provisional and is subject to revision. It is +being provided to meet the need for timely best science. This software is +provided "as is" and "as-available", and makes no representations or warranties +of any kind concerning the software, whether express, implied, statutory, or +other. This includes, without limitation, warranties of title, +merchantability, fitness for a particular purpose, non-infringement, absence +of latent or other defects, accuracy, or the presence or absence of errors, +whether or not known or discoverable. diff --git a/flopy/DISCLAIMER.md b/flopy/DISCLAIMER.md index 0e68af88f7..81ba20d032 100644 --- a/flopy/DISCLAIMER.md +++ b/flopy/DISCLAIMER.md @@ -1,9 +1,11 @@ Disclaimer ---------- -This software is provided "as is" and "as-available", and makes no -representations or warranties of any kind concerning the software, whether -express, implied, statutory, or other. This includes, without limitation, -warranties of title, merchantability, fitness for a particular purpose, -non-infringement, absence of latent or other defects, accuracy, or the -presence or absence of errors, whether or not known or discoverable. +This software is preliminary or provisional and is subject to revision. It is +being provided to meet the need for timely best science. This software is +provided "as is" and "as-available", and makes no representations or warranties +of any kind concerning the software, whether express, implied, statutory, or +other. 
This includes, without limitation, warranties of title, +merchantability, fitness for a particular purpose, non-infringement, absence +of latent or other defects, accuracy, or the presence or absence of errors, +whether or not known or discoverable. diff --git a/flopy/version.py b/flopy/version.py index 82c1fc0ddd..7bc53ac2a9 100644 --- a/flopy/version.py +++ b/flopy/version.py @@ -1,3 +1,3 @@ -# flopy version file automatically created using update_version.py on June 29, 2023 14:20:34 +# flopy version file automatically created using update_version.py on June 29, 2023 10:56:39 -__version__ = "3.4.1" +__version__ = "3.dev4" diff --git a/version.txt b/version.txt index 8cf6caf561..93a1d0ad48 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -3.4.1 \ No newline at end of file +3.dev4 \ No newline at end of file From 647507461b1f3befc6f92957982bdce5ebf812ab Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 30 Jun 2023 09:01:35 -0400 Subject: [PATCH 002/101] chore(version): update dev version to 3.4.2.dev0 (#1861) --- CITATION.cff | 4 ++-- README.md | 4 ++-- code.json | 4 ++-- docs/PyPI_release.md | 2 +- flopy/version.py | 4 ++-- scripts/update_version.py | 2 +- version.txt | 2 +- 7 files changed, 11 insertions(+), 11 deletions(-) diff --git a/CITATION.cff b/CITATION.cff index 2699a2891e..d32c914689 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -3,8 +3,8 @@ message: If you use this software, please cite both the article from preferred-c and the software itself. type: software title: FloPy -version: 3.dev4 -date-released: '2023-06-29' +version: 3.4.2.dev0 +date-released: '2023-06-30' doi: 10.5066/F7BK19FH abstract: A Python package to create, run, and post-process MODFLOW-based models. repository-artifact: https://pypi.org/project/flopy diff --git a/README.md b/README.md index 5c79c5fa2a..233177e147 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@ flopy3 -### Version 3.dev4 (preliminary) +### Version 3.4.2.dev0 (preliminary) [![flopy continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) [![Read the Docs](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml) @@ -142,7 +142,7 @@ How to Cite ##### ***Software/Code citation for FloPy:*** -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.dev4 (preliminary): U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.2.dev0 (preliminary): U.S. 
Geological Survey Software Release, 30 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Additional FloPy Related Publications diff --git a/code.json b/code.json index 94f89d2786..ca24c1e542 100644 --- a/code.json +++ b/code.json @@ -29,9 +29,9 @@ "downloadURL": "https://code.usgs.gov/usgs/modflow/flopy/archive/master.zip", "vcs": "git", "laborHours": -1, - "version": "3.dev4", + "version": "3.4.2.dev0", "date": { - "metadataLastUpdated": "2023-06-29" + "metadataLastUpdated": "2023-06-30" }, "organization": "U.S. Geological Survey", "permissions": { diff --git a/docs/PyPI_release.md b/docs/PyPI_release.md index 7a59879422..9ea6c3264e 100644 --- a/docs/PyPI_release.md +++ b/docs/PyPI_release.md @@ -30,7 +30,7 @@ How to Cite *Software/Code citation for FloPy:* -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.dev4 (preliminary): U.S. Geological Survey Software Release, 29 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.2.dev0 (preliminary): U.S. Geological Survey Software Release, 30 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Disclaimer diff --git a/flopy/version.py b/flopy/version.py index 7bc53ac2a9..23af23a369 100644 --- a/flopy/version.py +++ b/flopy/version.py @@ -1,3 +1,3 @@ -# flopy version file automatically created using update_version.py on June 29, 2023 10:56:39 +# flopy version file automatically created using update_version.py on June 30, 2023 08:29:09 -__version__ = "3.dev4" +__version__ = "3.4.2.dev0" diff --git a/scripts/update_version.py b/scripts/update_version.py index 8a1e11335a..c7ecdd369a 100644 --- a/scripts/update_version.py +++ b/scripts/update_version.py @@ -80,7 +80,7 @@ def update_version_py(timestamp: datetime, version: Version): f"# {_project_name} version file automatically created using " f"{Path(__file__).name} on {timestamp:%B %d, %Y %H:%M:%S}\n\n" ) - f.write(f"__version__ = '{version}'\n") + f.write(f'__version__ = "{version}"\n') f.close() print(f"Updated {_version_py_path} to version {version}") diff --git a/version.txt b/version.txt index 93a1d0ad48..7387b31eaa 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -3.dev4 \ No newline at end of file +3.4.2.dev0 \ No newline at end of file From 2bbdfb191d5855c9b9ab14e0809a31cf3230b91a Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 30 Jun 2023 17:44:21 -0400 Subject: [PATCH 003/101] test: minor fixes to notebook/script tests (#1862) --- autotest/test_example_scripts.py | 12 ------------ autotest/test_notebooks.py | 8 +++++++- 2 files changed, 7 insertions(+), 13 deletions(-) diff --git a/autotest/test_example_scripts.py b/autotest/test_example_scripts.py index b43e7787bf..8fd95d9e5a 100644 --- a/autotest/test_example_scripts.py +++ b/autotest/test_example_scripts.py @@ -1,6 +1,5 @@ import re from functools import reduce -from os import linesep from pprint import pprint import pytest @@ -35,7 +34,6 @@ def get_example_scripts(exclude=None): @pytest.mark.parametrize("script", get_example_scripts()) def test_scripts(script): stdout, stderr, returncode = 
run_py_script(script, verbose=True) - if returncode != 0: if "Missing optional dependency" in stderr: pkg = re.findall("Missing optional dependency '(.*)'", stderr)[0] @@ -44,13 +42,3 @@ def test_scripts(script): assert returncode == 0 pprint(stdout) pprint(stderr) - # allowed_patterns = ["findfont", "warning", "loose", "match_original"] - # assert ( - # not stderr - # or - # # trap warnings & non-fatal errors - # all( - # (not line or any(p in line.lower() for p in allowed_patterns)) - # for line in stderr.split(linesep) - # ) - # ) diff --git a/autotest/test_notebooks.py b/autotest/test_notebooks.py index 47629c1481..17967c9280 100644 --- a/autotest/test_notebooks.py +++ b/autotest/test_notebooks.py @@ -1,4 +1,6 @@ import re +from pathlib import Path +from pprint import pprint import pytest from autotest.conftest import get_project_root_path @@ -11,8 +13,10 @@ def get_notebooks(pattern=None, exclude=None): nbpaths = [ str(p) for p in (prjroot / ".docs" / "Notebooks").glob("*.py") - if pattern in p.name + if pattern is None or pattern in p.name ] + + # sort for pytest-xdist: workers must collect tests in the same order return sorted( [p for p in nbpaths if not exclude or not any(e in p for e in exclude)] ) @@ -39,3 +43,5 @@ def test_notebooks(notebook): pytest.skip(f"notebook requires package {pkg!r}") assert returncode == 0, f"could not run {notebook}" + pprint(stdout) + pprint(stderr) From 0e13703df4d695b285586c81427e5eb254ac2cc7 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Sat, 1 Jul 2023 16:27:03 -0400 Subject: [PATCH 004/101] ci(release): fix publish to PyPi step (#1863) --- .github/workflows/release.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 44b7a25398..6ff6af9cf4 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -236,4 +236,4 @@ jobs: path: dist - name: Publish to PyPI - run: python -m twine upload dist/* + uses: pypa/gh-action-pypi-publish@release/v1 From a16a379ef61cc16594ac6ac9eadb5ce4c6a4cee1 Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Mon, 10 Jul 2023 09:08:28 -0700 Subject: [PATCH 005/101] feat(simulation+model options): Dynamically generate simulation options from simulation namefile dfn (#1842) * feat(simulation+model options): Simulation and model file initialization parameters and attributes are now dynamically generated to include the namefile options found in the dfn file. 
* lint + imports --------- Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- autotest/regression/test_mf6.py | 8 + autotest/test_copy.py | 3 +- autotest/test_mf6.py | 4 +- flopy/mf6/data/mfdatalist.py | 2 + flopy/mf6/data/mfdatascalar.py | 2 + flopy/mf6/data/mfdatautil.py | 65 +- flopy/mf6/mfmodel.py | 29 +- flopy/mf6/mfsimbase.py | 2530 ++++++++++++++++++++++++++ flopy/mf6/modflow/mfgwf.py | 9 +- flopy/mf6/modflow/mfgwt.py | 8 +- flopy/mf6/modflow/mfsimulation.py | 2548 +-------------------------- flopy/mf6/utils/createpackages.py | 176 +- flopy/mf6/utils/generate_classes.py | 2 +- 13 files changed, 2860 insertions(+), 2526 deletions(-) create mode 100644 flopy/mf6/mfsimbase.py diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index f46e47193e..a8be56ac57 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -918,7 +918,10 @@ def test021_twri(function_tmpdir, example_data_path): version="mf6", exe_name="mf6", sim_ws=data_folder, + memory_print_option="SUMMARY", ) + sim.nocheck = True + sim.set_sim_path(function_tmpdir) tdis_rc = [(86400.0, 1, 1.0)] tdis_package = ModflowTdis( @@ -927,6 +930,8 @@ def test021_twri(function_tmpdir, example_data_path): model = ModflowGwf( sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) + model.print_input = True + ims_package = ModflowIms( sim, print_option="SUMMARY", @@ -1098,6 +1103,9 @@ def test021_twri(function_tmpdir, example_data_path): strt2 = ic2.strt.get_data() drn2 = model2.get_package("drn") drn_spd = drn2.stress_period_data.get_data() + assert sim2.memory_print_option.get_data().lower() == "summary" + assert sim2.nocheck.get_data() is True + assert model2.print_input.get_data() is True assert strt2[0, 0, 0] == 0.0 assert strt2[1, 0, 0] == 1.0 assert strt2[2, 0, 0] == 2.0 diff --git a/autotest/test_copy.py b/autotest/test_copy.py index 10668bbdd7..ee8cdf9264 100644 --- a/autotest/test_copy.py +++ b/autotest/test_copy.py @@ -10,7 +10,8 @@ from flopy.mbase import ModelInterface from flopy.mf6.data.mfdatalist import MFList, MFTransientList from flopy.mf6.mfpackage import MFChildPackages, MFPackage -from flopy.mf6.modflow.mfsimulation import MFSimulation, MFSimulationData +from flopy.mf6.mfsimbase import MFSimulationData +from flopy.mf6.modflow.mfsimulation import MFSimulation from flopy.modflow import Modflow from flopy.utils import TemporalReference diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index 39207dfee1..f5c40834b4 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -1,6 +1,5 @@ import os import platform -from os.path import join from pathlib import Path from shutil import copytree, which @@ -58,7 +57,7 @@ ) from flopy.mf6.data.mffileaccess import MFFileAccessArray from flopy.mf6.data.mfstructure import MFDataItemStructure, MFDataStructure -from flopy.mf6.mfbase import MFFileMgmt +from flopy.mf6.mfsimbase import MFSimulationData from flopy.mf6.modflow import ( mfgwf, mfgwfdis, @@ -72,7 +71,6 @@ mfims, mftdis, ) -from flopy.mf6.modflow.mfsimulation import MFSimulationData from flopy.utils import ( CellBudgetFile, HeadFile, diff --git a/flopy/mf6/data/mfdatalist.py b/flopy/mf6/data/mfdatalist.py index c9635b458f..a62916c32f 100644 --- a/flopy/mf6/data/mfdatalist.py +++ b/flopy/mf6/data/mfdatalist.py @@ -1944,6 +1944,8 @@ def _set_data_record( self._simulation_data.debug, ) if key is None: + if data is None: + return # search for a key new_key_index = self.structure.first_non_keyword_index() if new_key_index is not None and 
len(data) > new_key_index: diff --git a/flopy/mf6/data/mfdatascalar.py b/flopy/mf6/data/mfdatascalar.py index 0e625cf33f..7c8421167d 100644 --- a/flopy/mf6/data/mfdatascalar.py +++ b/flopy/mf6/data/mfdatascalar.py @@ -822,6 +822,8 @@ def set_data(self, data, key=None): if `data` is a dictionary. """ + if data is None and key is None: + return if isinstance(data, dict): # each item in the dictionary is a list for one stress period # the dictionary key is the stress period the list is for diff --git a/flopy/mf6/data/mfdatautil.py b/flopy/mf6/data/mfdatautil.py index dfe8470608..390903ebeb 100644 --- a/flopy/mf6/data/mfdatautil.py +++ b/flopy/mf6/data/mfdatautil.py @@ -103,33 +103,34 @@ def convert_data(data, data_dimensions, data_type, data_item=None, sub_amt=1): False, ) elif data_type == DatumType.integer: - if data_item is not None and data_item.numeric_index: - return int(PyListUtil.clean_numeric(data)) - sub_amt - try: - return int(data) - except (ValueError, TypeError): + if data is not None: + if data_item is not None and data_item.numeric_index: + return int(PyListUtil.clean_numeric(data)) - sub_amt try: - return int(PyListUtil.clean_numeric(data)) + return int(data) except (ValueError, TypeError): - message = ( - 'Data "{}" with value "{}" can not be ' - "converted to int" - ".".format(data_dimensions.structure.name, data) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - data_dimensions.structure.get_model(), - data_dimensions.structure.get_package(), - data_dimensions.structure.path, - "converting data", - data_dimensions.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - False, - ) + try: + return int(PyListUtil.clean_numeric(data)) + except (ValueError, TypeError): + message = ( + 'Data "{}" with value "{}" can not be ' + "converted to int" + ".".format(data_dimensions.structure.name, data) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + data_dimensions.structure.get_model(), + data_dimensions.structure.get_package(), + data_dimensions.structure.path, + "converting data", + data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + False, + ) elif data_type == DatumType.string and data is not None: if data_item is None or not data_item.preserve_case: # keep strings lower case @@ -892,7 +893,7 @@ def add_parameter( if model_parameter: self.model_parameters.append(param_descr) - def get_doc_string(self, model_doc_string=False): + def get_doc_string(self, model_doc_string=False, sim_doc_string=False): doc_string = '{}"""\n{}{}\n\n{}\n'.format( self.indent, self.indent, self.description, self.parameter_header ) @@ -912,7 +913,17 @@ def get_doc_string(self, model_doc_string=False): else: param_list = self.parameters for parameter in param_list: + if sim_doc_string: + pclean = parameter.strip() + if ( + pclean.startswith("simulation") + or pclean.startswith("loading_package") + or pclean.startswith("filename") + or pclean.startswith("pname") + or pclean.startswith("parent_file") + ): + continue doc_string += f"{parameter}\n" - if not model_doc_string: + if not (model_doc_string or sim_doc_string): doc_string += f'\n{self.indent}"""' return doc_string diff --git a/flopy/mf6/mfmodel.py b/flopy/mf6/mfmodel.py index a47cf106c4..be20e0c849 100644 --- a/flopy/mf6/mfmodel.py +++ b/flopy/mf6/mfmodel.py @@ -14,7 +14,7 @@ from ..utils import datautil from ..utils.check import mf6check from .coordinates import modeldimensions -from .data import mfstructure +from 
.data import mfdata, mfdatalist, mfstructure from .data.mfdatautil import DataSearchOutput, iterable from .mfbase import ( ExtFileAction, @@ -182,6 +182,24 @@ def __getattr__(self, item): return package raise AttributeError(item) + def __setattr__(self, name, value): + if hasattr(self, name) and getattr(self, name) is not None: + attribute = object.__getattribute__(self, name) + if attribute is not None and isinstance(attribute, mfdata.MFData): + try: + if isinstance(attribute, mfdatalist.MFList): + attribute.set_data(value, autofill=True) + else: + attribute.set_data(value) + except MFDataException as mfde: + raise MFDataException( + mfdata_except=mfde, + model=self.name, + package="", + ) + return + super().__setattr__(name, value) + def __repr__(self): return self._get_data_str(True) @@ -677,9 +695,9 @@ def check(self, f=None, verbose=True, level=1): return self._check(chk, level) - @classmethod + @staticmethod def load_base( - cls, + cls_child, simulation, structure, modelname="NewModel", @@ -729,9 +747,8 @@ def load_base( Examples -------- """ - instance = cls( + instance = cls_child( simulation, - mtype, modelname, model_nam_file=model_nam_file, version=version, @@ -863,7 +880,7 @@ def inspect_cells( -------- >>> import flopy - >>> sim = flopy.mf6.MFSimulation.load("name", "mf6", "mf6", ".") + >>> sim = flopy.mf6.MFSimulationBase.load("name", "mf6", "mf6", ".") >>> model = sim.get_model() >>> inspect_list = [(2, 3, 2), (0, 4, 2), (0, 2, 4)] >>> out_file = os.path.join("temp", "inspect_AdvGW_tidal.csv") diff --git a/flopy/mf6/mfsimbase.py b/flopy/mf6/mfsimbase.py new file mode 100644 index 0000000000..dc2da65c81 --- /dev/null +++ b/flopy/mf6/mfsimbase.py @@ -0,0 +1,2530 @@ +import errno +import inspect +import os.path +import sys +import warnings +from pathlib import Path +from typing import List, Optional, Union + +import numpy as np + +from flopy.mbase import run_model +from flopy.mf6.data import mfdata, mfdatalist, mfstructure +from flopy.mf6.data.mfdatautil import MFComment +from flopy.mf6.data.mfstructure import DatumType +from flopy.mf6.mfbase import ( + ExtFileAction, + FlopyException, + MFDataException, + MFFileMgmt, + PackageContainer, + PackageContainerType, + VerbosityLevel, +) +from flopy.mf6.mfpackage import MFPackage +from flopy.mf6.modflow import mfnam, mftdis +from flopy.mf6.utils import binaryfile_utils, mfobservation + + +class SimulationDict(dict): + """ + Class containing custom dictionary for MODFLOW simulations. Dictionary + contains model data. Dictionary keys are "paths" to the data that include + the model and package containing the data. + + Behaves as an dict with some additional features described below. + + Parameters + ---------- + path : MFFileMgmt + Object containing path information for the simulation + + """ + + def __init__(self, path=None): + dict.__init__(self) + self._path = path + + def __getitem__(self, key): + """ + Define the __getitem__ magic method. 
+ + Parameters + ---------- + key (string): Part or all of a dictionary key + + Returns: + MFData or numpy.ndarray + + """ + if key == "_path" or not hasattr(self, "_path"): + raise AttributeError(key) + + # FIX: Transport - Include transport output files + if key[1] in ("CBC", "HDS", "DDN", "UCN"): + val = binaryfile_utils.MFOutput(self, self._path, key) + return val.data + + elif key[-1] == "Observations": + val = mfobservation.MFObservation(self, self._path, key) + return val.data + + if key in self: + val = dict.__getitem__(self, key) + return val + return AttributeError(key) + + def __setitem__(self, key, val): + """ + Define the __setitem__ magic method. + + Parameters + ---------- + key : str + Dictionary key + val : MFData + MFData to store in dictionary + + """ + dict.__setitem__(self, key, val) + + def find_in_path(self, key_path, key_leaf): + """ + Attempt to find key_leaf in a partial key path key_path. + + Parameters + ---------- + key_path : str + partial path to the data + key_leaf : str + name of the data + + Returns + ------- + Data: MFData, + index: int + + """ + key_path_size = len(key_path) + for key, item in self.items(): + if key[:key_path_size] == key_path: + if key[-1] == key_leaf: + # found key_leaf as a key in the dictionary + return item, None + if not isinstance(item, MFComment): + data_item_index = 0 + data_item_structures = item.structure.data_item_structures + for data_item_struct in data_item_structures: + if data_item_struct.name == key_leaf: + # found key_leaf as a data item name in the data + # in the dictionary + return item, data_item_index + if data_item_struct.type != DatumType.keyword: + data_item_index += 1 + return None, None + + def output_keys(self, print_keys=True): + """ + Return a list of output data keys supported by the dictionary. + + Parameters + ---------- + print_keys : bool + print keys to console + + Returns + ------- + output keys : list + + """ + # get keys to request binary output + x = binaryfile_utils.MFOutputRequester.getkeys( + self, self._path, print_keys=print_keys + ) + return [key for key in x.dataDict] + + def input_keys(self): + """ + Return a list of input data keys. + + Returns + ------- + input keys : list + + """ + # get keys to request input ie. package data + for key in self: + print(key) + + def observation_keys(self): + """ + Return a list of observation keys. + + Returns + ------- + observation keys : list + + """ + # get keys to request observation file output + mfobservation.MFObservationRequester.getkeys(self, self._path) + + def keys(self): + """ + Return a list of all keys. + + Returns + ------- + all keys : list + + """ + # overrides the built in keys to print all keys, input and output + self.input_keys() + try: + self.output_keys() + except OSError as e: + if e.errno == errno.EEXIST: + pass + try: + self.observation_keys() + except KeyError: + pass + + +class MFSimulationData: + """ + Class containing MODFLOW simulation data and file formatting data. Use + MFSimulationData to set simulation-wide settings which include data + formatting and file location settings. 
+ + Parameters + ---------- + path : str + path on disk to the simulation + + Attributes + ---------- + indent_string : str + String used to define how much indent to use (file formatting) + internal_formatting : list + List defining string to use for internal formatting + external_formatting : list + List defining string to use for external formatting + open_close_formatting : list + List defining string to use for open/close + max_columns_of_data : int + Maximum columns of data before line wraps. For structured grids this + is set to ncol by default. For all other grids the default is 20. + wrap_multidim_arrays : bool + Whether to wrap line for multi-dimensional arrays at the end of a + row/column/layer + _float_precision : int + Number of decimal points to write for a floating point number + _float_characters : int + Number of characters a floating point number takes up + write_headers: bool + When true flopy writes a header to each package file indicating that + it was created by flopy + sci_note_upper_thres : float + Numbers greater than this threshold are written in scientific notation + sci_note_lower_thres : float + Numbers less than this threshold are written in scientific notation + mfpath : MFFileMgmt + File path location information for the simulation + model_dimensions : dict + Dictionary containing discretization information for each model + mfdata : SimulationDict + Custom dictionary containing all model data for the simulation + + """ + + def __init__(self, path: Union[str, os.PathLike], mfsim): + # --- formatting variables --- + self.indent_string = " " + self.constant_formatting = ["constant", ""] + self._max_columns_of_data = 20 + self.wrap_multidim_arrays = True + self._float_precision = 8 + self._float_characters = 15 + self.write_headers = True + self._sci_note_upper_thres = 100000 + self._sci_note_lower_thres = 0.001 + self.fast_write = True + self.comments_on = False + self.auto_set_sizes = True + self.verify_data = True + self.debug = False + self.verbose = True + self.verbosity_level = VerbosityLevel.normal + self.max_columns_user_set = False + self.max_columns_auto_set = False + + self._update_str_format() + + # --- file path --- + self.mfpath = MFFileMgmt(path, mfsim) + + # --- ease of use variables to make working with modflow input and + # output data easier --- model dimension class for each model + self.model_dimensions = {} + + # --- model data --- + self.mfdata = SimulationDict(self.mfpath) + + # --- temporary variables --- + # other external files referenced + self.referenced_files = {} + + @property + def lazy_io(self): + if not self.auto_set_sizes and not self.verify_data: + return True + return False + + @lazy_io.setter + def lazy_io(self, val): + if val: + self.auto_set_sizes = False + self.verify_data = False + else: + self.auto_set_sizes = True + self.verify_data = True + + @property + def max_columns_of_data(self): + return self._max_columns_of_data + + @max_columns_of_data.setter + def max_columns_of_data(self, val): + if not self.max_columns_user_set and ( + not self.max_columns_auto_set or val > self._max_columns_of_data + ): + self._max_columns_of_data = val + self.max_columns_user_set = True + + @property + def float_precision(self): + """ + Gets precision of floating point numbers. + """ + return self._float_precision + + @float_precision.setter + def float_precision(self, value): + """ + Sets precision of floating point numbers. 
+ + Parameters + ---------- + value: float + floating point precision + + """ + self._float_precision = value + self._update_str_format() + + @property + def float_characters(self): + """ + Gets max characters used in floating point numbers. + """ + return self._float_characters + + @float_characters.setter + def float_characters(self, value): + """ + Sets max characters used in floating point numbers. + + Parameters + ---------- + value: float + floating point max characters + + """ + self._float_characters = value + self._update_str_format() + + def set_sci_note_upper_thres(self, value): + """ + Sets threshold number where any number larger than threshold + is represented in scientific notation. + + Parameters + ---------- + value: float + threshold value + + """ + self._sci_note_upper_thres = value + self._update_str_format() + + def set_sci_note_lower_thres(self, value): + """ + Sets threshold number where any number smaller than threshold + is represented in scientific notation. + + Parameters + ---------- + value: float + threshold value + + """ + self._sci_note_lower_thres = value + self._update_str_format() + + def _update_str_format(self): + """ + Update floating point formatting strings.""" + self.reg_format_str = f"{{:.{self._float_precision}E}}" + self.sci_format_str = ( + f"{{:{self._float_characters}.{self._float_precision}f}}" + ) + + +class MFSimulationBase(PackageContainer): + """ + Entry point into any MODFLOW simulation. + + MFSimulation is used to load, build, and/or save a MODFLOW 6 simulation. + A MFSimulation object must be created before creating any of the MODFLOW 6 + model objects. + + Parameters + ---------- + sim_name : str + Name of the simulation. + version : str + Version of MODFLOW 6 executable + exe_name : str + Path to MODFLOW 6 executable + sim_ws : str + Path to MODFLOW 6 simulation working folder. This is the folder + containing the simulation name file. + verbosity_level : int + Verbosity level of standard output from 0 to 2. When 0 is specified no + standard output is written. When 1 is specified standard + error/warning messages with some informational messages are written. + When 2 is specified full error/warning/informational messages are + written (this is ideal for debugging). + continue_ : bool + Sets the continue option in the simulation name file. The continue + option is a keyword flag to indicate that the simulation should + continue even if one or more solutions do not converge. + nocheck : bool + Sets the nocheck option in the simulation name file. The nocheck + option is a keyword flag to indicate that the model input check + routines should not be called prior to each time step. Checks + are performed by default. + memory_print_option : str + Sets memory_print_option in the simulation name file. + Memory_print_option is a flag that controls printing of detailed + memory manager usage to the end of the simulation list file. NONE + means do not print detailed information. SUMMARY means print only + the total memory for each simulation component. ALL means print + information for each variable stored in the memory manager. NONE is + default if memory_print_option is not specified. + write_headers: bool + When true flopy writes a header to each package file indicating that + it was created by flopy. + lazy_io: bool + When true flopy only reads external data when the data is requested + and only writes external data if the data has changed. This option + automatically overrides the verify_data and auto_set_sizes, turning + both off. 
+ Examples + -------- + >>> s = MFSimulationBase.load('my simulation', 'simulation.nam') + + Attributes + ---------- + sim_name : str + Name of the simulation + name_file : MFPackage + Simulation name file package + + """ + + def __init__( + self, + sim_name="sim", + version="mf6", + exe_name: Union[str, os.PathLike] = "mf6", + sim_ws: Union[str, os.PathLike] = os.curdir, + verbosity_level=1, + continue_=None, + nocheck=None, + memory_print_option=None, + write_headers=True, + lazy_io=False, + ): + super().__init__(MFSimulationData(sim_ws, self), sim_name) + self.simulation_data.verbosity_level = self._resolve_verbosity_level( + verbosity_level + ) + self.simulation_data.write_headers = write_headers + if lazy_io: + self.simulation_data.lazy_io = True + + # verify metadata + fpdata = mfstructure.MFStructure() + if not fpdata.valid: + excpt_str = ( + "Invalid package metadata. Unable to load MODFLOW " + "file structure metadata." + ) + raise FlopyException(excpt_str) + + # initialize + self.dimensions = None + self.type = "Simulation" + + self.version = version + self.exe_name = exe_name + self._models = {} + self._tdis_file = None + self._exchange_files = {} + self._solution_files = {} + self._other_files = {} + self.structure = fpdata.sim_struct + self.model_type = None + + self._exg_file_num = {} + + self.simulation_data.mfpath.set_last_accessed_path() + + # build simulation name file + self.name_file = mfnam.ModflowNam( + self, + filename="mfsim.nam", + continue_=continue_, + nocheck=nocheck, + memory_print_option=memory_print_option, + _internal_package=True, + ) + + # try to build directory structure + sim_path = self.simulation_data.mfpath.get_sim_path() + if not os.path.isdir(sim_path): + try: + os.makedirs(sim_path) + except OSError as e: + if ( + self.simulation_data.verbosity_level.value + >= VerbosityLevel.quiet.value + ): + print( + "An error occurred when trying to create the " + "directory {}: {}".format(sim_path, e.strerror) + ) + + # set simulation validity initially to false since the user must first + # add at least one model to the simulation and fill out the name and + # tdis files + self.valid = False + + def __getattr__(self, item): + """ + Override __getattr__ to allow retrieving models. 
+ + __getattr__ is used to allow for getting models and packages as if + they are attributes + + Parameters + ---------- + item : str + model or package name + + + Returns + ------- + md : Model or package object + Model or package object of type :class:flopy6.mfmodel or + :class:flopy6.mfpackage + + """ + if item == "valid" or not hasattr(self, "valid"): + raise AttributeError(item) + + models = [] + if item in self.structure.model_types: + # get all models of this type + for model in self._models.values(): + if model.model_type == item or model.model_type[:-1] == item: + models.append(model) + + if len(models) > 0: + return models + elif item in self._models: + model = self.get_model(item) + if model is not None: + return model + raise AttributeError(item) + else: + package = self.get_package(item) + if package is not None: + return package + raise AttributeError(item) + + def __setattr__(self, name, value): + if hasattr(self, name) and getattr(self, name) is not None: + attribute = object.__getattribute__(self, name) + if attribute is not None and isinstance(attribute, mfdata.MFData): + try: + if isinstance(attribute, mfdatalist.MFList): + attribute.set_data(value, autofill=True) + else: + attribute.set_data(value) + except MFDataException as mfde: + raise MFDataException( + mfdata_except=mfde, + model="", + package="", + ) + return + super().__setattr__(name, value) + + def __repr__(self): + """ + Override __repr__ to print custom string. + + Returns + -------- + repr string : str + string describing object + + """ + return self._get_data_str(True) + + def __str__(self): + """ + Override __str__ to print custom string. + + Returns + -------- + str string : str + string describing object + + """ + return self._get_data_str(False) + + def _get_data_str(self, formal): + file_mgt = self.simulation_data.mfpath + data_str = ( + "sim_name = {}\nsim_path = {}\nexe_name = " + "{}\n" + "\n".format(self.name, file_mgt.get_sim_path(), self.exe_name) + ) + + for package in self._packagelist: + pk_str = package._get_data_str(formal, False) + if formal: + if len(pk_str.strip()) > 0: + data_str = ( + "{}###################\nPackage {}\n" + "###################\n\n" + "{}\n".format(data_str, package._get_pname(), pk_str) + ) + else: + if len(pk_str.strip()) > 0: + data_str = ( + "{}###################\nPackage {}\n" + "###################\n\n" + "{}\n".format(data_str, package._get_pname(), pk_str) + ) + for model in self._models.values(): + if formal: + mod_repr = repr(model) + if len(mod_repr.strip()) > 0: + data_str = ( + "{}@@@@@@@@@@@@@@@@@@@@\nModel {}\n" + "@@@@@@@@@@@@@@@@@@@@\n\n" + "{}\n".format(data_str, model.name, mod_repr) + ) + else: + mod_str = str(model) + if len(mod_str.strip()) > 0: + data_str = ( + "{}@@@@@@@@@@@@@@@@@@@@\nModel {}\n" + "@@@@@@@@@@@@@@@@@@@@\n\n" + "{}\n".format(data_str, model.name, mod_str) + ) + return data_str + + @property + def model_names(self): + """ + Return a list of model names associated with this simulation. + + Returns + -------- + list: list of model names + + """ + return list(self._models.keys()) + + @property + def exchange_files(self): + """ + Return list of exchange files associated with this simulation. 
+ + Returns + -------- + list: list of exchange names + + """ + return self._exchange_files.values() + + @staticmethod + def load( + cls_child, + sim_name="modflowsim", + version="mf6", + exe_name: Union[str, os.PathLike] = "mf6", + sim_ws: Union[str, os.PathLike] = os.curdir, + strict=True, + verbosity_level=1, + load_only=None, + verify_data=False, + write_headers=True, + lazy_io=False, + ): + """ + Load an existing model. Do not call this method directly. Should only + be called by child class. + + Parameters + ---------- + cls_child : + cls object of child class calling load + sim_name : str + Name of the simulation. + version : str + MODFLOW version + exe_name : str or PathLike + Path to MODFLOW executable (relative to the simulation workspace or absolute) + sim_ws : str or PathLike + Path to simulation workspace + strict : bool + Strict enforcement of file formatting + verbosity_level : int + Verbosity level of standard output + 0: No standard output + 1: Standard error/warning messages with some informational + messages + 2: Verbose mode with full error/warning/informational + messages. This is ideal for debugging. + load_only : list + List of package abbreviations or package names corresponding to + packages that flopy will load. default is None, which loads all + packages. the discretization packages will load regardless of this + setting. subpackages, like time series and observations, will also + load regardless of this setting. + example list: ['ic', 'maw', 'npf', 'oc', 'ims', 'gwf6-gwf6'] + verify_data : bool + Verify data when it is loaded. this can slow down loading + write_headers: bool + When true flopy writes a header to each package file indicating + that it was created by flopy + lazy_io: bool + When true flopy only reads external data when the data is requested + and only writes external data if the data has changed. This option + automatically overrides the verify_data and auto_set_sizes, turning + both off. + Returns + ------- + sim : MFSimulation object + + Examples + -------- + >>> s = flopy.mf6.mfsimulation.load('my simulation') + + """ + # initialize + instance = cls_child( + sim_name, + version, + exe_name, + sim_ws, + verbosity_level, + write_headers=write_headers, + ) + verbosity_level = instance.simulation_data.verbosity_level + + instance.simulation_data.verify_data = verify_data + if lazy_io: + instance.simulation_data.lazy_io = True + + if verbosity_level.value >= VerbosityLevel.normal.value: + print("loading simulation...") + + # build case consistent load_only dictionary for quick lookups + load_only = instance._load_only_dict(load_only) + + # load simulation name file + if verbosity_level.value >= VerbosityLevel.normal.value: + print(" loading simulation name file...") + instance.name_file.load(strict) + + # load TDIS file + tdis_pkg = f"tdis{mfstructure.MFStructure().get_version_string()}" + tdis_attr = getattr(instance.name_file, tdis_pkg) + instance._tdis_file = mftdis.ModflowTdis( + instance, filename=tdis_attr.get_data() + ) + + instance._tdis_file._filename = instance.simulation_data.mfdata[ + ("nam", "timing", tdis_pkg) + ].get_data() + if verbosity_level.value >= VerbosityLevel.normal.value: + print(" loading tdis package...") + instance._tdis_file.load(strict) + + # load models + try: + model_recarray = instance.simulation_data.mfdata[ + ("nam", "models", "models") + ] + models = model_recarray.get_data() + except MFDataException as mfde: + message = ( + "Error occurred while loading model names from the " + "simulation name file." 
+ ) + raise MFDataException( + mfdata_except=mfde, + model=instance.name, + package="nam", + message=message, + ) + for item in models: + # resolve model working folder and name file + path, name_file = os.path.split(item[1]) + model_obj = PackageContainer.model_factory(item[0][:-1].lower()) + # load model + if verbosity_level.value >= VerbosityLevel.normal.value: + print(f" loading model {item[0].lower()}...") + instance._models[item[2]] = model_obj.load( + instance, + instance.structure.model_struct_objs[item[0].lower()], + item[2], + name_file, + version, + exe_name, + strict, + path, + load_only, + ) + + # load exchange packages and dependent packages + try: + exchange_recarray = instance.name_file.exchanges + has_exch_data = exchange_recarray.has_data() + except MFDataException as mfde: + message = ( + "Error occurred while loading exchange names from the " + "simulation name file." + ) + raise MFDataException( + mfdata_except=mfde, + model=instance.name, + package="nam", + message=message, + ) + if has_exch_data: + try: + exch_data = exchange_recarray.get_data() + except MFDataException as mfde: + message = ( + "Error occurred while loading exchange names from " + "the simulation name file." + ) + raise MFDataException( + mfdata_except=mfde, + model=instance.name, + package="nam", + message=message, + ) + for exgfile in exch_data: + if load_only is not None and not instance._in_pkg_list( + load_only, exgfile[0], exgfile[2] + ): + if ( + instance.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print(f" skipping package {exgfile[0].lower()}...") + continue + # get exchange type by removing numbers from exgtype + exchange_type = "".join( + [char for char in exgfile[0] if not char.isdigit()] + ).upper() + # get exchange number for this type + if exchange_type not in instance._exg_file_num: + exchange_file_num = 0 + instance._exg_file_num[exchange_type] = 1 + else: + exchange_file_num = instance._exg_file_num[exchange_type] + instance._exg_file_num[exchange_type] += 1 + + exchange_name = f"{exchange_type}_EXG_{exchange_file_num}" + # find package class the corresponds to this exchange type + package_obj = instance.package_factory( + exchange_type.replace("-", "").lower(), "" + ) + if not package_obj: + message = ( + "An error occurred while loading the " + "simulation name file. Invalid exchange type " + '"{}" specified.'.format(exchange_type) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + instance.name, + "nam", + "nam", + "loading simulation name file", + exchange_recarray.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + instance._simulation_data.debug, + ) + + # build and load exchange package object + exchange_file = package_obj( + instance, + exgtype=exgfile[0], + exgmnamea=exgfile[2], + exgmnameb=exgfile[3], + filename=exgfile[1], + pname=exchange_name, + loading_package=True, + ) + if verbosity_level.value >= VerbosityLevel.normal.value: + print( + f" loading exchange package {exchange_file._get_pname()}..." + ) + exchange_file.load(strict) + instance._exchange_files[exgfile[1]] = exchange_file + + # load simulation packages + solution_recarray = instance.simulation_data.mfdata[ + ("nam", "solutiongroup", "solutiongroup") + ] + + try: + solution_group_dict = solution_recarray.get_data() + except MFDataException as mfde: + message = ( + "Error occurred while loading solution groups from " + "the simulation name file." 
+ ) + raise MFDataException( + mfdata_except=mfde, + model=instance.name, + package="nam", + message=message, + ) + for solution_group in solution_group_dict.values(): + for solution_info in solution_group: + if load_only is not None and not instance._in_pkg_list( + load_only, solution_info[0], solution_info[2] + ): + if ( + instance.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print( + f" skipping package {solution_info[0].lower()}..." + ) + continue + # create solution package + sln_package_obj = instance.package_factory( + solution_info[0][:-1].lower(), "" + ) + sln_package = sln_package_obj( + instance, + filename=solution_info[1], + pname=solution_info[2], + ) + + if verbosity_level.value >= VerbosityLevel.normal.value: + print( + f" loading solution package " + f"{sln_package._get_pname()}..." + ) + sln_package.load(strict) + + instance.simulation_data.mfpath.set_last_accessed_path() + if verify_data: + instance.check() + return instance + + def check( + self, + f: Optional[Union[str, os.PathLike]] = None, + verbose=True, + level=1, + ): + """ + Check model data for common errors. + + Parameters + ---------- + f : str or PathLike, optional + String defining file name or file handle for summary file + of check method output. If str or pathlike, a file handle + is created. If None, the method does not write results to + a summary file. (default is None) + verbose : bool + Boolean flag used to determine if check method results are + written to the screen + level : int + Check method analysis level. If level=0, summary checks are + performed. If level=1, full checks are performed. + + Returns + ------- + check list: list + Python list containing simulation check results + + Examples + -------- + + >>> import flopy + >>> m = flopy.modflow.Modflow.load('model.nam') + >>> m.check() + """ + # check instance for simulation-level check + chk_list = [] + + # check models + for model in self._models.values(): + print(f'Checking model "{model.name}"...') + chk_list.append(model.check(f, verbose, level)) + + print("Checking for missing simulation packages...") + if self._tdis_file is None: + if chk_list: + chk_list[0]._add_to_summary( + "Error", desc="\r No tdis package", package="model" + ) + print("Error: no tdis package") + if len(self._solution_files) == 0: + if chk_list: + chk_list[0]._add_to_summary( + "Error", desc="\r No solver package", package="model" + ) + print("Error: no solution package") + return chk_list + + @property + def sim_path(self) -> Path: + return Path(self.simulation_data.mfpath.get_sim_path()) + + @property + def sim_package_list(self): + """List of all "simulation level" packages""" + package_list = [] + if self._tdis_file is not None: + package_list.append(self._tdis_file) + for sim_package in self._solution_files.values(): + package_list.append(sim_package) + for sim_package in self._exchange_files.values(): + package_list.append(sim_package) + for sim_package in self._other_files.values(): + package_list.append(sim_package) + return package_list + + def load_package( + self, + ftype, + fname: Union[str, os.PathLike], + pname, + strict, + ref_path: Union[str, os.PathLike], + dict_package_name=None, + parent_package=None, + ): + """ + Load a package from a file. + + Parameters + ---------- + ftype : str + the file type + fname : str or PathLike + the path of the file containing the package input + pname : str + the user-defined name for the package + strict : bool + strict mode when loading the file + ref_path : str + path to the file. 
uses local path if set to None + dict_package_name : str + package name for dictionary lookup + parent_package : MFPackage + parent package + + """ + if ( + ftype in self.structure.package_struct_objs + and self.structure.package_struct_objs[ftype].multi_package_support + ) or ( + ftype in self.structure.utl_struct_objs + and self.structure.utl_struct_objs[ftype].multi_package_support + ): + # resolve dictionary name for package + if dict_package_name is not None: + if parent_package is not None: + dict_package_name = f"{parent_package.path[-1]}_{ftype}" + else: + # use dict_package_name as the base name + if ftype in self._ftype_num_dict: + self._ftype_num_dict[dict_package_name] += 1 + else: + self._ftype_num_dict[dict_package_name] = 0 + dict_package_name = "{}_{}".format( + dict_package_name, + self._ftype_num_dict[dict_package_name], + ) + else: + # use ftype as the base name + if ftype in self._ftype_num_dict: + self._ftype_num_dict[ftype] += 1 + else: + self._ftype_num_dict[ftype] = 0 + if pname is not None: + dict_package_name = pname + else: + dict_package_name = ( + f"{ftype}_{self._ftype_num_dict[ftype]}" + ) + else: + dict_package_name = ftype + + # get package object from file type + package_obj = self.package_factory(ftype, "") + # create package + package = package_obj( + self, + filename=fname, + pname=dict_package_name, + parent_file=parent_package, + loading_package=True, + ) + package.load(strict) + self._other_files[package.filename] = package + # register child package with the simulation + self._add_package(package, package.path) + if parent_package is not None: + # register child package with the parent package + parent_package._add_package(package, package.path) + return package + + def register_ims_package( + self, solution_file: MFPackage, model_list: Union[str, List[str]] + ): + self.register_solution_package(solution_file, model_list) + + def register_solution_package( + self, solution_file: MFPackage, model_list: Union[str, List[str]] + ): + """ + Register a solution package with the simulation. + + Parameters + solution_file : MFPackage + solution package to register + model_list : list of strings + list of models using the solution package to be registered + + """ + if isinstance(model_list, str): + model_list = [model_list] + + if ( + solution_file.package_type + not in mfstructure.MFStructure().flopy_dict["solution_packages"] + ): + comment = ( + 'Parameter "solution_file" is not a valid solution file. ' + 'Expected solution file, but type "{}" was given' + ".".format(type(solution_file)) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + None, + "solution", + "", + "registering solution package", + "", + inspect.stack()[0][3], + type_, + value_, + traceback_, + comment, + self.simulation_data.debug, + ) + valid_model_types = mfstructure.MFStructure().flopy_dict[ + "solution_packages" + ][solution_file.package_type] + # remove models from existing solution groups + if model_list is not None: + for model in model_list: + md = self.get_model(model) + if md is not None and ( + md.model_type not in valid_model_types + and "*" not in valid_model_types + ): + comment = ( + f"Model type {md.model_type} is not a valid type " + f"for solution file {solution_file.filename} solution " + f"file type {solution_file.package_type}. 
Valid model " + f"types are {valid_model_types}" + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + None, + "solution", + "", + "registering solution package", + "", + inspect.stack()[0][3], + type_, + value_, + traceback_, + comment, + self.simulation_data.debug, + ) + self._remove_from_all_solution_groups(model) + + # register solution package with model list + in_simulation = False + pkg_with_same_name = None + for file in self._solution_files.values(): + if file is solution_file: + in_simulation = True + if ( + file.package_name == solution_file.package_name + and file != solution_file + ): + pkg_with_same_name = file + if ( + self.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print( + "WARNING: solution package with name {} already exists. " + "New solution package will replace old package" + ".".format(file.package_name) + ) + self._remove_package(self._solution_files[file.filename]) + del self._solution_files[file.filename] + break + # register solution package + if not in_simulation: + self._add_package( + solution_file, self._get_package_path(solution_file) + ) + # do not allow a solution package to be registered twice with the + # same simulation + if not in_simulation: + # create unique file/package name + if solution_file.package_name is None: + file_num = len(self._solution_files) - 1 + solution_file.package_name = ( + f"{solution_file.package_type}_{file_num}" + ) + if solution_file.filename in self._solution_files: + solution_file.filename = MFFileMgmt.unique_file_name( + solution_file.filename, self._solution_files + ) + # add solution package to simulation + self._solution_files[solution_file.filename] = solution_file + + # If solution file is being replaced, replace solution filename in + # solution group + if pkg_with_same_name is not None and self._is_in_solution_group( + pkg_with_same_name.filename, 1 + ): + # change existing solution group to reflect new solution file + self._replace_solution_in_solution_group( + pkg_with_same_name.filename, 1, solution_file.filename + ) + # only allow solution package to be registered to one solution group + elif model_list is not None: + sln_file_in_group = self._is_in_solution_group( + solution_file.filename, 1 + ) + # add solution group to the simulation name file + solution_recarray = self.name_file.solutiongroup + solution_group_list = solution_recarray.get_active_key_list() + if len(solution_group_list) == 0: + solution_group_num = 0 + else: + solution_group_num = solution_group_list[-1][0] + + if sln_file_in_group: + self._append_to_solution_group( + solution_file.filename, model_list + ) + else: + if self.name_file.mxiter.get_data(solution_group_num) is None: + self.name_file.mxiter.add_transient_key(solution_group_num) + + # associate any models in the model list to this + # simulation file + version_string = mfstructure.MFStructure().get_version_string() + solution_pkg = f"{solution_file.package_abbr}{version_string}" + new_record = [solution_pkg, solution_file.filename] + for model in model_list: + new_record.append(model) + try: + solution_recarray.append_list_as_record( + new_record, solution_group_num + ) + except MFDataException as mfde: + message = ( + "Error occurred while updating the " + "simulation name file with the solution package " + 'file "{}".'.format(solution_file.filename) + ) + raise MFDataException( + mfdata_except=mfde, package="nam", message=message + ) + + @staticmethod + def _rename_package_group(group_dict, name): + package_type_count = {} + # 
first build an array to avoid key modification errors + package_array = [] + for package in group_dict.values(): + package_array.append(package) + # update package file names and count + for package in package_array: + if package.package_type not in package_type_count: + file_name = f"{name}.{package.package_type}" + package_type_count[package.package_type] = 1 + else: + package_type_count[package.package_type] += 1 + ptc = package_type_count[package.package_type] + file_name = f"{name}_{ptc}.{package.package_type}" + base_filepath = os.path.split(package.filename)[0] + if base_filepath != "": + # include existing relative path in new file name + file_name = os.path.join(base_filepath, file_name) + package.filename = file_name + + def _rename_exchange_file(self, package, new_filename): + self._exchange_files[package.filename] = package + try: + exchange_recarray_data = self.name_file.exchanges.get_data() + except MFDataException as mfde: + message = ( + "An error occurred while retrieving exchange " + "data from the simulation name file. The error " + "occurred while registering exchange file " + f'"{package.filename}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + if exchange_recarray_data is not None: + for index, exchange in zip( + range(0, len(exchange_recarray_data)), + exchange_recarray_data, + ): + if exchange[1] == package.filename: + # update existing exchange + exchange_recarray_data[index][1] = new_filename + ex_recarray = self.name_file.exchanges + try: + ex_recarray.set_data(exchange_recarray_data) + except MFDataException as mfde: + message = ( + "An error occurred while setting " + "exchange data in the simulation name " + "file. The error occurred while " + "registering the following " + "values (exgtype, filename, " + f'exgmnamea, exgmnameb): "{package.exgtype} ' + f"{package.filename} {package.exgmnamea}" + f'{package.exgmnameb}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + return + + def _set_timing_block(self, file_name): + struct_root = mfstructure.MFStructure() + tdis_pkg = f"tdis{struct_root.get_version_string()}" + tdis_attr = getattr(self.name_file, tdis_pkg) + try: + tdis_attr.set_data(file_name) + except MFDataException as mfde: + message = ( + "An error occurred while setting the tdis package " + f'file name "{file_name}". The error occurred while ' + "registering the tdis package with the " + "simulation" + ) + raise MFDataException( + mfdata_except=mfde, + package=file_name, + message=message, + ) + + def update_package_filename(self, package, new_name): + """ + Updates internal arrays to be consistent with a new file name. + This is for internal flopy library use only. 
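The file-naming scheme `_rename_package_group` implements can be sketched standalone (hypothetical prefix "proj" and package types; not part of the patch itself):

# Sketch of the renaming scheme above: the first package of each type
# gets "name.type"; later packages of the same type get "name_N.type".
package_type_count = {}
for package_type in ["wel", "wel", "chd"]:
    if package_type not in package_type_count:
        package_type_count[package_type] = 1
        print(f"proj.{package_type}")
    else:
        package_type_count[package_type] += 1
        print(f"proj_{package_type_count[package_type]}.{package_type}")
# -> proj.wel, proj_2.wel, proj.chd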
+
+        Parameters
+        ----------
+        package: MFPackage
+            Package with new name
+        new_name: str
+            Package's new name
+
+        """
+        if (
+            self._tdis_file is not None
+            and package.filename == self._tdis_file.filename
+        ):
+            self._set_timing_block(new_name)
+        elif package.filename in self._exchange_files:
+            self._exchange_files[new_name] = self._exchange_files.pop(
+                package.filename
+            )
+            self._rename_exchange_file(package, new_name)
+        elif package.filename in self._solution_files:
+            self._solution_files[new_name] = self._solution_files.pop(
+                package.filename
+            )
+            self._update_solution_group(package.filename, new_name)
+        else:
+            self._other_files[new_name] = self._other_files.pop(
+                package.filename
+            )
+
+    def rename_all_packages(self, name):
+        """
+        Rename all packages with name as prefix.
+
+        Parameters
+        ----------
+        name: str
+            Prefix of package names
+
+        """
+        if self._tdis_file is not None:
+            self._tdis_file.filename = f"{name}.{self._tdis_file.package_type}"
+
+        self._rename_package_group(self._exchange_files, name)
+        self._rename_package_group(self._solution_files, name)
+        self._rename_package_group(self._other_files, name)
+        for model in self._models.values():
+            model.rename_all_packages(name)
+
+    def set_all_data_external(
+        self, check_data=True, external_data_folder=None
+    ):
+        """Sets the simulation's list and array data to be stored externally.
+
+        Parameters
+        ----------
+        check_data: bool
+            Determines if data error checking is enabled during this
+            process. Data error checking can be slow on large datasets.
+        external_data_folder: str or PathLike
+            Path relative to the simulation path or model relative path
+            (see use_model_relative_path parameter), where external data
+            will be stored
+        """
+        # copy any files whose paths have changed
+        self.simulation_data.mfpath.copy_files()
+        # set data external for all packages in all models
+        for model in self._models.values():
+            model.set_all_data_external(check_data, external_data_folder)
+        # set data external for solution packages
+        for package in self._solution_files.values():
+            package.set_all_data_external(check_data, external_data_folder)
+        # set data external for other packages
+        for package in self._other_files.values():
+            package.set_all_data_external(check_data, external_data_folder)
+        # set data external for exchange packages
+        for package in self._exchange_files.values():
+            package.set_all_data_external(check_data, external_data_folder)
+
+    def set_all_data_internal(self, check_data=True):
+        """Sets the simulation's list and array data to be stored internally.
+
+        Parameters
+        ----------
+        check_data: bool
+            Determines if data error checking is enabled during this
+            process.
+        """
+        # set data internal for all packages in all models
+        for model in self._models.values():
+            model.set_all_data_internal(check_data)
+        # set data internal for solution packages
+        for package in self._solution_files.values():
+            package.set_all_data_internal(check_data)
+        # set data internal for other packages
+        for package in self._other_files.values():
+            package.set_all_data_internal(check_data)
+        # set data internal for exchange packages
+        for package in self._exchange_files.values():
+            package.set_all_data_internal(check_data)
+
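A minimal usage sketch of the external/internal storage switch (workspace and folder names are hypothetical):

import flopy

sim = flopy.mf6.MFSimulation(sim_name="demo", sim_ws="demo_ws")  # hypothetical
# ... models and packages would be added here ...
sim.set_all_data_external(check_data=False, external_data_folder="external")
sim.write_simulation()       # list/array data now lands in files under external/
sim.set_all_data_internal()  # move the data back inline if desired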
+    def write_simulation(
+        self, ext_file_action=ExtFileAction.copy_relative_paths, silent=False
+    ):
+        """
+        Write the simulation to files.
+
+        Parameters
+        ----------
+        ext_file_action : ExtFileAction
+            Defines what to do with external files when the simulation
+            path has changed. Defaults to copy_relative_paths which
+            copies only files with relative paths, leaving files defined
+            by absolute paths fixed.
+        silent : bool
+            Writes out the simulation in silent mode (verbosity_level = 0)
+
+        """
+        sim_data = self.simulation_data
+        if not sim_data.max_columns_user_set:
+            # search for dis packages
+            for model in self._models.values():
+                dis = model.get_package("dis")
+                if dis is not None and hasattr(dis, "ncol"):
+                    sim_data.max_columns_of_data = dis.ncol.get_data()
+                    sim_data.max_columns_user_set = False
+                    sim_data.max_columns_auto_set = True
+
+        saved_verb_lvl = self.simulation_data.verbosity_level
+        if silent:
+            self.simulation_data.verbosity_level = VerbosityLevel.quiet
+
+        # write simulation name file
+        if (
+            self.simulation_data.verbosity_level.value
+            >= VerbosityLevel.normal.value
+        ):
+            print("writing simulation...")
+            print("  writing simulation name file...")
+        self.name_file.write(ext_file_action=ext_file_action)
+
+        # write TDIS file
+        if (
+            self.simulation_data.verbosity_level.value
+            >= VerbosityLevel.normal.value
+        ):
+            print("  writing simulation tdis package...")
+        self._tdis_file.write(ext_file_action=ext_file_action)
+
+        # write solution files
+        for solution_file in self._solution_files.values():
+            if (
+                self.simulation_data.verbosity_level.value
+                >= VerbosityLevel.normal.value
+            ):
+                print(
+                    f"  writing solution package "
+                    f"{solution_file._get_pname()}..."
+                )
+            solution_file.write(ext_file_action=ext_file_action)
+
+        # write exchange files
+        for exchange_file in self._exchange_files.values():
+            exchange_file.write()
+
+        # write other packages
+        for pp in self._other_files.values():
+            if (
+                self.simulation_data.verbosity_level.value
+                >= VerbosityLevel.normal.value
+            ):
+                print(f"  writing package {pp._get_pname()}...")
+            pp.write(ext_file_action=ext_file_action)
+
+        # FIX: model working folder should be model name file folder
+
+        # write models
+        for model in self._models.values():
+            if (
+                self.simulation_data.verbosity_level.value
+                >= VerbosityLevel.normal.value
+            ):
+                print(f"  writing model {model.name}...")
+            model.write(ext_file_action=ext_file_action)
+
+        self.simulation_data.mfpath.set_last_accessed_path()
+
+        if silent:
+            self.simulation_data.verbosity_level = saved_verb_lvl
+
+    def set_sim_path(self, path: Union[str, os.PathLike]):
+        """Set the simulation's working folder path.
+
+        Parameters
+        ----------
+        path : str or PathLike
+            Relative or absolute path to simulation root folder.
+
+        """
+        # set all data internal
+        self.set_all_data_internal()
+
+        # set simulation path
+        self.simulation_data.mfpath.set_sim_path(path, True)
+
+        if not os.path.exists(path):
+            # create new simulation folder
+            os.makedirs(path)
+
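A minimal sketch of the write/run cycle these methods support (hypothetical workspace; assumes an mf6 executable on the system path):

import flopy

sim = flopy.mf6.MFSimulation.load(sim_ws="existing_ws")  # hypothetical workspace
sim.write_simulation(silent=True)  # verbosity is restored afterwards
success, buff = sim.run_simulation(report=True)
if not success:
    print("\n".join(buff))  # stdout lines captured because report=True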
+    def run_simulation(
+        self,
+        silent=None,
+        pause=False,
+        report=False,
+        processors=None,
+        normal_msg="normal termination",
+        use_async=False,
+        cargs=None,
+    ):
+        """
+        Run the simulation.
+
+        Parameters
+        ----------
+        silent: bool
+            Run in silent mode
+        pause: bool
+            Pause at end of run
+        report: bool
+            Save stdout lines to a list (buff)
+        processors: int
+            Number of processors. Parallel simulations are only supported
+            for MODFLOW 6 simulations. (default is None)
+        normal_msg: str or list
+            Normal termination message used to determine if the run
+            terminated normally. More than one message can be provided
+            using a list. (default is 'normal termination')
+        use_async : bool
+            Asynchronously read model stdout and report with timestamps.
+            Good for models that take a long time to run, not for models
+            that run quickly.
+        cargs : str or list of strings
+            Additional command line arguments to pass to the executable.
+            (default is None)
+
+        Returns
+        --------
+        success : bool
+        buff : list of lines of stdout
+
+        """
+        if silent is None:
+            if (
+                self.simulation_data.verbosity_level.value
+                >= VerbosityLevel.normal.value
+            ):
+                silent = False
+            else:
+                silent = True
+        return run_model(
+            self.exe_name,
+            None,
+            self.simulation_data.mfpath.get_sim_path(),
+            silent=silent,
+            pause=pause,
+            report=report,
+            processors=processors,
+            normal_msg=normal_msg,
+            use_async=use_async,
+            cargs=cargs,
+        )
+
+    def delete_output_files(self):
+        """Deletes simulation output files."""
+        output_req = binaryfile_utils.MFOutputRequester
+        output_file_keys = output_req.getkeys(
+            self.simulation_data.mfdata, self.simulation_data.mfpath, False
+        )
+        for path in output_file_keys.binarypathdict.values():
+            if os.path.isfile(path):
+                os.remove(path)
+
+    def remove_package(self, package_name):
+        """
+        Removes package from the simulation. `package_name` can be the
+        package's name, type, or package object to be removed from the model.
+
+        Parameters
+        ----------
+        package_name : str
+            Name of package to be removed
+
+        """
+        if isinstance(package_name, MFPackage):
+            packages = [package_name]
+        else:
+            packages = self.get_package(package_name)
+            if not isinstance(packages, list):
+                packages = [packages]
+        for package in packages:
+            if (
+                self._tdis_file is not None
+                and package.path == self._tdis_file.path
+            ):
+                self._tdis_file = None
+            if package.filename in self._exchange_files:
+                del self._exchange_files[package.filename]
+            if package.filename in self._solution_files:
+                del self._solution_files[package.filename]
+                self._update_solution_group(package.filename)
+            if package.filename in self._other_files:
+                del self._other_files[package.filename]
+
+            self._remove_package(package)
+
+    @property
+    def model_dict(self):
+        """
+        Return a dictionary of models associated with this simulation.
+
+        Returns
+        --------
+        model dict : dict
+            dictionary of models
+
+        """
+        return self._models.copy()
+
+    def get_model(self, model_name=None):
+        """
+        Returns the model in the simulation with the given model name.
+        The lookup falls back to a case-insensitive match.
+
+        Parameters
+        ----------
+        model_name : str
+            Name of the model to get. Passing in None will get the
+            first model.
+
+        Returns
+        --------
+        model : MFModel
+
+        """
+        if len(self._models) == 0:
+            return None
+
+        if model_name is None:
+            for model in self._models.values():
+                return model
+        if model_name in self._models:
+            return self._models[model_name]
+        # do case-insensitive lookup
+        for name, model in self._models.items():
+            if model_name.lower() == name.lower():
+                return model
+        return None
+
+    def get_exchange_file(self, filename):
+        """
+        Get a specified exchange file.
+
+        Parameters
+        ----------
+        filename : str
+            Name of exchange file to get
+
+        Returns
+        --------
+        exchange package : MFPackage
+
+        """
+        if filename in self._exchange_files:
+            return self._exchange_files[filename]
+        else:
+            excpt_str = f'Exchange file "{filename}" can not be found.'
+            raise FlopyException(excpt_str)
+
+    def get_file(self, filename):
+        """
+        Get a specified file.
+
+        Parameters
+        ----------
+        filename : str
+            Name of file to get
+
+        Returns
+        --------
+        package : MFPackage
+
+        """
+        if filename in self._other_files:
+            return self._other_files[filename]
+        else:
+            excpt_str = f'file "{filename}" can not be found.'
+            raise FlopyException(excpt_str)
+
+    def get_mvr_file(self, filename):
+        """
+        Get a specified mover file.
+ + Parameters + ---------- + filename : str + Name of mover file to get + + Returns + -------- + mover package : MFPackage + + """ + warnings.warn( + "get_mvr_file will be deprecated and will be removed in version " + "3.3.6. Use get_file", + PendingDeprecationWarning, + ) + if filename in self._other_files: + return self._other_files[filename] + else: + excpt_str = f'MVR file "{filename}" can not be found.' + raise FlopyException(excpt_str) + + def get_mvt_file(self, filename): + """ + Get a specified mvt file. + + Parameters + ---------- + filename : str + Name of mover transport file to get + + Returns + -------- + mover transport package : MFPackage + + """ + warnings.warn( + "get_mvt_file will be deprecated and will be removed in version " + "3.3.6. Use get_file", + PendingDeprecationWarning, + ) + if filename in self._other_files: + return self._other_files[filename] + else: + excpt_str = f'MVT file "{filename}" can not be found.' + raise FlopyException(excpt_str) + + def get_gnc_file(self, filename): + """ + Get a specified gnc file. + + Parameters + ---------- + filename : str + Name of gnc file to get + + Returns + -------- + gnc package : MFPackage + + """ + warnings.warn( + "get_gnc_file will be deprecated and will be removed in version " + "3.3.6. Use get_file", + PendingDeprecationWarning, + ) + if filename in self._other_files: + return self._other_files[filename] + else: + excpt_str = f'GNC file "{filename}" can not be found.' + raise FlopyException(excpt_str) + + def remove_exchange_file(self, package): + """ + Removes the exchange file "package". This is for internal flopy + library use only. + + Parameters + ---------- + package: MFPackage + Exchange package to be removed + + """ + self._exchange_files[package.filename] = package + try: + exchange_recarray_data = self.name_file.exchanges.get_data() + except MFDataException as mfde: + message = ( + "An error occurred while retrieving exchange " + "data from the simulation name file. The error " + "occurred while registering exchange file " + f'"{package.filename}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + remove_indices = [] + if exchange_recarray_data is not None: + for index, exchange in zip( + range(0, len(exchange_recarray_data)), + exchange_recarray_data, + ): + if ( + package.filename is not None + and exchange[1] == package.filename + ): + remove_indices.append(index) + if len(remove_indices) > 0: + self.name_file.exchanges.set_data( + np.delete(exchange_recarray_data, remove_indices) + ) + + def register_exchange_file(self, package): + """ + Register an exchange package file with the simulation. This is a + call-back method made from the package and should not be called + directly. + + Parameters + ---------- + package : MFPackage + Exchange package object to register + + """ + if package.filename not in self._exchange_files: + exgtype = package.exgtype + exgmnamea = package.exgmnamea + exgmnameb = package.exgmnameb + + if exgtype is None or exgmnamea is None or exgmnameb is None: + excpt_str = ( + "Exchange packages require that exgtype, " + "exgmnamea, and exgmnameb are specified." + ) + raise FlopyException(excpt_str) + + self._exchange_files[package.filename] = package + try: + exchange_recarray_data = self.name_file.exchanges.get_data() + except MFDataException as mfde: + message = ( + "An error occurred while retrieving exchange " + "data from the simulation name file. 
The error " + "occurred while registering exchange file " + f'"{package.filename}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + if exchange_recarray_data is not None: + for index, exchange in zip( + range(0, len(exchange_recarray_data)), + exchange_recarray_data, + ): + if exchange[1] == package.filename: + # update existing exchange + exchange_recarray_data[index][0] = exgtype + exchange_recarray_data[index][2] = exgmnamea + exchange_recarray_data[index][3] = exgmnameb + ex_recarray = self.name_file.exchanges + try: + ex_recarray.set_data(exchange_recarray_data) + except MFDataException as mfde: + message = ( + "An error occurred while setting " + "exchange data in the simulation name " + "file. The error occurred while " + "registering the following " + "values (exgtype, filename, " + f'exgmnamea, exgmnameb): "{exgtype} ' + f"{package.filename} {exgmnamea}" + f'{exgmnameb}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + return + try: + # add new exchange + self.name_file.exchanges.append_data( + [(exgtype, package.filename, exgmnamea, exgmnameb)] + ) + except MFDataException as mfde: + message = ( + "An error occurred while setting exchange data " + "in the simulation name file. The error occurred " + "while registering the following values (exgtype, " + f'filename, exgmnamea, exgmnameb): "{exgtype} ' + f'{package.filename} {exgmnamea} {exgmnameb}".' + ) + raise MFDataException( + mfdata_except=mfde, + package=package._get_pname(), + message=message, + ) + if ( + package.dimensions is None + or package.dimensions.model_dim is None + ): + # resolve exchange package dimensions object + package.dimensions = package.create_package_dimensions() + + def _remove_package_by_type(self, package): + pname = None + if package.package_name is not None: + pname = package.package_name.lower() + if ( + package.package_type.lower() == "tdis" + and self._tdis_file is not None + and self._tdis_file in self._packagelist + ): + # tdis package already exists. there can be only one tdis + # package. remove existing tdis package + if ( + self.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print( + "WARNING: tdis package already exists. Replacing " + "existing tdis package." + ) + self._remove_package(self._tdis_file) + elif ( + package.package_type.lower() + in mfstructure.MFStructure().flopy_dict["solution_packages"] + and pname in self.package_name_dict + ): + if ( + self.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print( + "WARNING: Package with name " + f"{package.package_name.lower()} already exists. " + "Replacing existing package." + ) + self._remove_package(self.package_name_dict[pname]) + else: + if ( + package.filename in self._other_files + and self._other_files[package.filename] in self._packagelist + ): + # other package with same file name already exists. remove old + # package + if ( + self.simulation_data.verbosity_level.value + >= VerbosityLevel.normal.value + ): + print( + f"WARNING: package with name {pname} already exists. " + "Replacing existing package." + ) + self._remove_package(self._other_files[package.filename]) + del self._other_files[package.filename] + + def register_package( + self, + package, + add_to_package_list=True, + set_package_name=True, + set_package_filename=True, + ): + """ + Register a package file with the simulation. 
This is a
+        call-back method made from the package and should not be called
+        directly.
+
+        Parameters
+        ----------
+        package : MFPackage
+            Package to register
+        add_to_package_list : bool
+            Add package to lookup list
+        set_package_name : bool
+            Produce a package name for this package
+        set_package_filename : bool
+            Produce a filename for this package
+
+        Returns
+        --------
+        (path : tuple, package structure : MFPackageStructure)
+
+        """
+        if set_package_filename:
+            # set initial package filename
+            base_name = os.path.basename(os.path.normpath(self.name))
+            package._filename = f"{base_name}.{package.package_type}"
+
+        package.container_type = [PackageContainerType.simulation]
+        path = self._get_package_path(package)
+        if add_to_package_list and package.package_type.lower() != "nam":
+            if (
+                package.package_type.lower()
+                not in mfstructure.MFStructure().flopy_dict[
+                    "solution_packages"
+                ]
+            ):
+                # all but solution packages get added here. solution packages
+                # are added during solution package registration
+                self._remove_package_by_type(package)
+                self._add_package(package, path)
+        sln_dict = mfstructure.MFStructure().flopy_dict["solution_packages"]
+        if package.package_type.lower() == "nam":
+            if not package.internal_package:
+                excpt_str = (
+                    "Unable to register nam file. Do not create your own nam "
+                    "files. Nam files are automatically created and managed "
+                    "for you by FloPy."
+                )
+                print(excpt_str)
+                raise FlopyException(excpt_str)
+            return path, self.structure.name_file_struct_obj
+        elif package.package_type.lower() == "tdis":
+            self._tdis_file = package
+            self._set_timing_block(package.quoted_filename)
+            return (
+                path,
+                self.structure.package_struct_objs[
+                    package.package_type.lower()
+                ],
+            )
+        elif package.package_type.lower() in sln_dict:
+            supported_packages = sln_dict[package.package_type.lower()]
+            # default behavior is to register the solution package with the
+            # first unregistered model
+            unregistered_models = []
+            for model_name, model in self._models.items():
+                model_registered = self._is_in_solution_group(
+                    model_name, 2, True
+                )
+                if not model_registered and (
+                    model.model_type in supported_packages
+                    or "*" in supported_packages
+                ):
+                    unregistered_models.append(model_name)
+            if unregistered_models:
+                self.register_solution_package(package, unregistered_models)
+            else:
+                self.register_solution_package(package, None)
+            return (
+                path,
+                self.structure.package_struct_objs[
+                    package.package_type.lower()
+                ],
+            )
+        else:
+            if package.filename not in self._other_files:
+                self._other_files[package.filename] = package
+            else:
+                # auto generate a unique file name and register it
+                file_name = MFFileMgmt.unique_file_name(
+                    package.filename, self._other_files
+                )
+                package.filename = file_name
+                self._other_files[file_name] = package
+
+        if package.package_type.lower() in self.structure.package_struct_objs:
+            return (
+                path,
+                self.structure.package_struct_objs[
+                    package.package_type.lower()
+                ],
+            )
+        elif package.package_type.lower() in self.structure.utl_struct_objs:
+            return (
+                path,
+                self.structure.utl_struct_objs[package.package_type.lower()],
+            )
+        else:
+            excpt_str = (
+                'Invalid package type "{}". Unable to register '
+                "package.".format(package.package_type)
+            )
+            print(excpt_str)
+            raise FlopyException(excpt_str)
+
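Registration normally happens as a side effect of constructing a package; a sketch for the exchange case, assuming an existing simulation "sim" containing models "gwf_1" and "gwf_2" (the single exchange record is illustrative only):

import flopy

# (cellidm1, cellidm2, ihc, cl1, cl2, hwva) for one hypothetical cell pair
exchangedata = [((0, 0, 9), (0, 0, 0), 1, 50.0, 50.0, 100.0)]
exg = flopy.mf6.ModflowGwfgwf(
    sim,
    exgtype="GWF6-GWF6",   # required by the checks in register_exchange_file
    exgmnamea="gwf_1",     # hypothetical model names
    exgmnameb="gwf_2",
    nexg=len(exchangedata),
    exchangedata=exchangedata,
)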
+    def rename_model_namefile(self, model, new_namefile):
+        """
+        Rename a model's namefile. For internal flopy library use only.
+
+        Parameters
+        ----------
+        model : MFModel
+            Model object whose namefile to rename
+        new_namefile : str
+            Name of the new namefile
+
+        """
+        # update simulation name file
+        models = self.name_file.models.get_data()
+        for mdl in models:
+            path, name_file_name = os.path.split(mdl[1])
+            if name_file_name == model.name_file.filename:
+                mdl[1] = os.path.join(path, new_namefile)
+        self.name_file.models.set_data(models)
+
+    def register_model(self, model, model_type, model_name, model_namefile):
+        """
+        Add a model to the simulation. This is a call-back method made
+        from the package and should not be called directly.
+
+        Parameters
+        ----------
+        model : MFModel
+            Model object to add to simulation
+        model_type : str
+            Type of the model
+        model_name : str
+            Name of the model
+        model_namefile : str
+            Name of the model's name file
+
+        Returns
+        --------
+        model_structure_object : MFModelStructure
+        """
+
+        # get model structure from model type
+        if model_type not in self.structure.model_struct_objs:
+            message = f'Invalid model type: "{model_type}".'
+            type_, value_, traceback_ = sys.exc_info()
+            raise MFDataException(
+                model.name,
+                "",
+                model.name,
+                "registering model",
+                "sim",
+                inspect.stack()[0][3],
+                type_,
+                value_,
+                traceback_,
+                message,
+                self.simulation_data.debug,
+            )
+
+        # add model
+        self._models[model_name] = model
+
+        # update simulation name file
+        self.name_file.models.append_list_as_record(
+            [model_type, model_namefile, model_name]
+        )
+
+        if len(self._solution_files) > 0:
+            # register model with first solution file found
+            first_solution_key = next(iter(self._solution_files))
+            self.register_solution_package(
+                self._solution_files[first_solution_key], model_name
+            )
+
+        return self.structure.model_struct_objs[model_type]
+
+    def get_ims_package(self, key):
+        warnings.warn(
+            "get_ims_package() has been deprecated and will be "
+            "removed in version 3.3.7. Use "
+            "get_solution_package() instead.",
+            DeprecationWarning,
+        )
+        return self.get_solution_package(key)
+
+    def get_solution_package(self, key):
+        """
+        Get the solution package with the specified `key`.
+
+        Parameters
+        ----------
+        key : str
+            solution package file name
+
+        Returns
+        --------
+        solution_package : MFPackage
+
+        """
+        if key in self._solution_files:
+            return self._solution_files[key]
+        return None
+
+    def remove_model(self, model_name):
+        """
+        Remove model with name `model_name` from the simulation
+
+        Parameters
+        ----------
+        model_name : str
+            Model name to remove from simulation
+
+        """
+        # Remove model
+        del self._models[model_name]
+
+        # TODO: Fully implement this
+        # Update simulation name file
+
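A short sketch of the lookups and removal above (the simulation object and all names are hypothetical):

ims = sim.get_solution_package("demo.ims")  # solution packages are keyed by file name
gwf = sim.get_model("gwf_1")                # falls back to a case-insensitive match
sim.remove_model("gwf_1")                   # name-file update is still a TODO above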
+    def is_valid(self):
+        """
+        Checks the validity of the simulation and all of its models and
+        packages. Returns True if the simulation is valid, False if it
+        is not.
+
+        Returns
+        --------
+        valid : bool
+            Whether this is a valid simulation
+
+        """
+        # name file valid
+        if not self.name_file.is_valid():
+            return False
+
+        # tdis file valid
+        if not self._tdis_file.is_valid():
+            return False
+
+        # exchanges valid
+        for exchange in self._exchange_files.values():
+            if not exchange.is_valid():
+                return False
+
+        # solution files valid
+        for solution_file in self._solution_files.values():
+            if not solution_file.is_valid():
+                return False
+
+        # a model exists
+        if not self._models:
+            return False
+
+        # models valid
+        for key in self._models:
+            if not self._models[key].is_valid():
+                return False
+
+        # each model has a solution file
+
+        return True
+
+    @staticmethod
+    def _resolve_verbosity_level(verbosity_level):
+        if verbosity_level == 0:
+            return VerbosityLevel.quiet
+        elif verbosity_level == 1:
+            return VerbosityLevel.normal
+        elif verbosity_level == 2:
+            return VerbosityLevel.verbose
+        else:
+            return verbosity_level
+
+    @staticmethod
+    def _get_package_path(package):
+        if package.parent_file is not None:
+            return (package.parent_file.path) + (package.package_type,)
+        else:
+            return (package.package_type,)
+
+    def _update_solution_group(self, solution_file, new_name=None):
+        solution_recarray = self.name_file.solutiongroup
+        for solution_group_num in solution_recarray.get_active_key_list():
+            try:
+                rec_array = solution_recarray.get_data(solution_group_num[0])
+            except MFDataException as mfde:
+                message = (
+                    "An error occurred while getting solution group"
+                    '"{}" from the simulation name file'
+                    ".".format(solution_group_num[0])
+                )
+                raise MFDataException(
+                    mfdata_except=mfde, package="nam", message=message
+                )
+
+            new_array = []
+            for record in rec_array:
+                if record.slnfname == solution_file:
+                    if new_name is not None:
+                        record.slnfname = new_name
+                        new_array.append(tuple(record))
+                    else:
+                        continue
+                else:
+                    new_array.append(record)
+
+            if not new_array:
+                new_array = None
+
+            solution_recarray.set_data(new_array, solution_group_num[0])
+
+    def _remove_from_all_solution_groups(self, modelname):
+        solution_recarray = self.name_file.solutiongroup
+        for solution_group_num in solution_recarray.get_active_key_list():
+            try:
+                rec_array = solution_recarray.get_data(solution_group_num[0])
+            except MFDataException as mfde:
+                message = (
+                    "An error occurred while getting solution group"
+                    '"{}" from the simulation name file'
+                    ".".format(solution_group_num[0])
+                )
+                raise MFDataException(
+                    mfdata_except=mfde, package="nam", message=message
+                )
+            new_array = ["no_check"]
+            for record in rec_array:
+                new_record = []
+                new_record.append(record[0])
+                new_record.append(record[1])
+                for item in list(record)[2:]:
+                    if item is not None and item.lower() != modelname.lower():
+                        new_record.append(item)
+                new_array.append(tuple(new_record))
+            solution_recarray.set_data(new_array, solution_group_num[0])
+
+    def _append_to_solution_group(self, solution_file, new_models):
+        # clear models out of solution groups
+        if new_models is not None:
+            for model in new_models:
+                self._remove_from_all_solution_groups(model)
+
+        # append models to solution_file
+        solution_recarray = self.name_file.solutiongroup
+        for solution_group_num in solution_recarray.get_active_key_list():
+            try:
+                rec_array = solution_recarray.get_data(solution_group_num[0])
+            except MFDataException as mfde:
+                message = (
+                    "An error occurred while getting solution group"
+                    '"{}" from the simulation name file'
+                    ".".format(solution_group_num[0])
+                )
+                raise MFDataException(
+                    mfdata_except=mfde,
package="nam", message=message + ) + new_array = [] + for index, record in enumerate(rec_array): + new_record = [] + rec_model_dict = {} + for index, item in enumerate(record): + if ( + record[1] == solution_file or item not in new_models + ) and item is not None: + new_record.append(item) + if index > 1 and item is not None: + rec_model_dict[item.lower()] = 1 + + if record[1] == solution_file: + for model in new_models: + if model.lower() not in rec_model_dict: + new_record.append(model) + + new_array.append(tuple(new_record)) + solution_recarray.set_data(new_array, solution_group_num[0]) + + def _replace_solution_in_solution_group(self, item, index, new_item): + solution_recarray = self.name_file.solutiongroup + for solution_group_num in solution_recarray.get_active_key_list(): + try: + rec_array = solution_recarray.get_data(solution_group_num[0]) + except MFDataException as mfde: + message = ( + "An error occurred while getting solution group" + '"{}" from the simulation name file. The error ' + 'occurred while replacing solution file "{}" with "{}"' + 'at index "{}"'.format( + solution_group_num[0], item, new_item, index + ) + ) + raise MFDataException( + mfdata_except=mfde, package="nam", message=message + ) + if rec_array is not None: + for rec_item in rec_array: + if rec_item[index] == item: + rec_item[index] = new_item + + def _is_in_solution_group(self, item, index, any_idx_after=False): + solution_recarray = self.name_file.solutiongroup + for solution_group_num in solution_recarray.get_active_key_list(): + try: + rec_array = solution_recarray.get_data(solution_group_num[0]) + except MFDataException as mfde: + message = ( + "An error occurred while getting solution group" + '"{}" from the simulation name file. The error ' + 'occurred while verifying file "{}" at index "{}" ' + "is in the simulation name file" + ".".format(solution_group_num[0], item, index) + ) + raise MFDataException( + mfdata_except=mfde, package="nam", message=message + ) + + if rec_array is not None: + for rec_item in rec_array: + if any_idx_after: + for idx in range(index, len(rec_item)): + if rec_item[idx] == item: + return True + else: + if rec_item[index] == item: + return True + return False + + def plot( + self, + model_list: Optional[Union[str, List[str]]] = None, + SelPackList=None, + **kwargs, + ): + """ + Plot simulation or models. + + Method to plot a whole simulation or a series of models + that are part of a simulation. + + Parameters + ---------- + model_list: list, optional + List of model names to plot, if none all models will be plotted + SelPackList: list, optional + List of package names to plot, if none all packages will be + plotted + kwargs: + filename_base : str + Base file name that will be used to automatically + generate file names for output image files. Plots will be + exported as image files if file_name_base is not None. + (default is None) + file_extension : str + Valid matplotlib.pyplot file extension for savefig(). + Only used if filename_base is not None. (default is 'png') + mflay : int + MODFLOW zero-based layer number to return. If None, then + all layers will be included. (default is None) + kper : int + MODFLOW zero-based stress period number to return. + (default is zero) + key : str + MFList dictionary key. 
(default is None) + + Returns + -------- + axes: (list) + matplotlib.pyplot.axes objects + + """ + from ...plot.plotutil import PlotUtilities + + axes = PlotUtilities._plot_simulation_helper( + self, model_list=model_list, SelPackList=SelPackList, **kwargs + ) + return axes diff --git a/flopy/mf6/modflow/mfgwf.py b/flopy/mf6/modflow/mfgwf.py index 9601fb38c4..eb88d83e6a 100644 --- a/flopy/mf6/modflow/mfgwf.py +++ b/flopy/mf6/modflow/mfgwf.py @@ -90,7 +90,6 @@ def __init__( print_flows=None, save_flows=None, newtonoptions=None, - packages=None, **kwargs, ): super().__init__( @@ -109,7 +108,12 @@ def __init__( self.name_file.print_flows.set_data(print_flows) self.name_file.save_flows.set_data(save_flows) self.name_file.newtonoptions.set_data(newtonoptions) - self.name_file.packages.set_data(packages) + + self.list = self.name_file.list + self.print_input = self.name_file.print_input + self.print_flows = self.name_file.print_flows + self.save_flows = self.name_file.save_flows + self.newtonoptions = self.name_file.newtonoptions @classmethod def load( @@ -125,6 +129,7 @@ def load( load_only=None, ): return mfmodel.MFModel.load_base( + cls, simulation, structure, modelname, diff --git a/flopy/mf6/modflow/mfgwt.py b/flopy/mf6/modflow/mfgwt.py index afd4d7f9cf..d182e3d645 100644 --- a/flopy/mf6/modflow/mfgwt.py +++ b/flopy/mf6/modflow/mfgwt.py @@ -84,7 +84,6 @@ def __init__( print_input=None, print_flows=None, save_flows=None, - packages=None, **kwargs, ): super().__init__( @@ -102,7 +101,11 @@ def __init__( self.name_file.print_input.set_data(print_input) self.name_file.print_flows.set_data(print_flows) self.name_file.save_flows.set_data(save_flows) - self.name_file.packages.set_data(packages) + + self.list = self.name_file.list + self.print_input = self.name_file.print_input + self.print_flows = self.name_file.print_flows + self.save_flows = self.name_file.save_flows @classmethod def load( @@ -118,6 +121,7 @@ def load( load_only=None, ): return mfmodel.MFModel.load_base( + cls, simulation, structure, modelname, diff --git a/flopy/mf6/modflow/mfsimulation.py b/flopy/mf6/modflow/mfsimulation.py index 3e9304ffc8..cde15d3a4e 100644 --- a/flopy/mf6/modflow/mfsimulation.py +++ b/flopy/mf6/modflow/mfsimulation.py @@ -1,389 +1,14 @@ -import errno -import inspect -import os.path -import sys -import warnings -from pathlib import Path -from typing import List, Optional, Union +# DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY +# mf6/utils/createpackages.py +# FILE created on June 22, 2023 21:13:41 UTC +import os +from typing import Union -import numpy as np +from .. import mfsimbase -from ...mbase import run_model -from ..data import mfstructure -from ..data.mfdatautil import MFComment -from ..data.mfstructure import DatumType -from ..mfbase import ( - ExtFileAction, - FlopyException, - MFDataException, - MFFileMgmt, - PackageContainer, - PackageContainerType, - VerbosityLevel, -) -from ..mfpackage import MFPackage -from ..modflow import mfnam, mftdis -from ..utils import binaryfile_utils, mfobservation - -class SimulationDict(dict): - """ - Class containing custom dictionary for MODFLOW simulations. Dictionary - contains model data. Dictionary keys are "paths" to the data that include - the model and package containing the data. - - Behaves as an dict with some additional features described below. 
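For orientation (the class appears to be relocated to mfsimbase by this patch, with its behavior unchanged), entries in this dictionary are addressed by "path" tuples; a sketch with hypothetical names:

# Assumes an existing simulation "sim"; model/package names are hypothetical.
# Keys are path tuples such as (model name, package, block, variable).
delr = sim.simulation_data.mfdata[("gwf_1", "dis", "griddata", "delr")]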
- - Parameters - ---------- - path : MFFileMgmt - Object containing path information for the simulation - - """ - - def __init__(self, path=None): - dict.__init__(self) - self._path = path - - def __getitem__(self, key): - """ - Define the __getitem__ magic method. - - Parameters - ---------- - key (string): Part or all of a dictionary key - - Returns: - MFData or numpy.ndarray - - """ - if key == "_path" or not hasattr(self, "_path"): - raise AttributeError(key) - - # FIX: Transport - Include transport output files - if key[1] in ("CBC", "HDS", "DDN", "UCN"): - val = binaryfile_utils.MFOutput(self, self._path, key) - return val.data - - elif key[-1] == "Observations": - val = mfobservation.MFObservation(self, self._path, key) - return val.data - - if key in self: - val = dict.__getitem__(self, key) - return val - return AttributeError(key) - - def __setitem__(self, key, val): - """ - Define the __setitem__ magic method. - - Parameters - ---------- - key : str - Dictionary key - val : MFData - MFData to store in dictionary - - """ - dict.__setitem__(self, key, val) - - def find_in_path(self, key_path, key_leaf): - """ - Attempt to find key_leaf in a partial key path key_path. - - Parameters - ---------- - key_path : str - partial path to the data - key_leaf : str - name of the data - - Returns - ------- - Data: MFData, - index: int - - """ - key_path_size = len(key_path) - for key, item in self.items(): - if key[:key_path_size] == key_path: - if key[-1] == key_leaf: - # found key_leaf as a key in the dictionary - return item, None - if not isinstance(item, MFComment): - data_item_index = 0 - data_item_structures = item.structure.data_item_structures - for data_item_struct in data_item_structures: - if data_item_struct.name == key_leaf: - # found key_leaf as a data item name in the data - # in the dictionary - return item, data_item_index - if data_item_struct.type != DatumType.keyword: - data_item_index += 1 - return None, None - - def output_keys(self, print_keys=True): - """ - Return a list of output data keys supported by the dictionary. - - Parameters - ---------- - print_keys : bool - print keys to console - - Returns - ------- - output keys : list - - """ - # get keys to request binary output - x = binaryfile_utils.MFOutputRequester.getkeys( - self, self._path, print_keys=print_keys - ) - return [key for key in x.dataDict] - - def input_keys(self): - """ - Return a list of input data keys. - - Returns - ------- - input keys : list - - """ - # get keys to request input ie. package data - for key in self: - print(key) - - def observation_keys(self): - """ - Return a list of observation keys. - - Returns - ------- - observation keys : list - - """ - # get keys to request observation file output - mfobservation.MFObservationRequester.getkeys(self, self._path) - - def keys(self): - """ - Return a list of all keys. - - Returns - ------- - all keys : list - - """ - # overrides the built in keys to print all keys, input and output - self.input_keys() - try: - self.output_keys() - except OSError as e: - if e.errno == errno.EEXIST: - pass - try: - self.observation_keys() - except KeyError: - pass - - -class MFSimulationData: +class MFSimulation(mfsimbase.MFSimulationBase): """ - Class containing MODFLOW simulation data and file formatting data. Use - MFSimulationData to set simulation-wide settings which include data - formatting and file location settings. 
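A sketch of the output-key helpers above, assuming a simulation "sim" that has already been run and produced binary output:

keys = sim.simulation_data.mfdata.output_keys(print_keys=False)
heads = sim.simulation_data.mfdata[keys[0]]  # e.g. a (model, "HDS", ...) key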
- - Parameters - ---------- - path : str - path on disk to the simulation - - Attributes - ---------- - indent_string : str - String used to define how much indent to use (file formatting) - internal_formatting : list - List defining string to use for internal formatting - external_formatting : list - List defining string to use for external formatting - open_close_formatting : list - List defining string to use for open/close - max_columns_of_data : int - Maximum columns of data before line wraps. For structured grids this - is set to ncol by default. For all other grids the default is 20. - wrap_multidim_arrays : bool - Whether to wrap line for multi-dimensional arrays at the end of a - row/column/layer - _float_precision : int - Number of decimal points to write for a floating point number - _float_characters : int - Number of characters a floating point number takes up - write_headers: bool - When true flopy writes a header to each package file indicating that - it was created by flopy - sci_note_upper_thres : float - Numbers greater than this threshold are written in scientific notation - sci_note_lower_thres : float - Numbers less than this threshold are written in scientific notation - mfpath : MFFileMgmt - File path location information for the simulation - model_dimensions : dict - Dictionary containing discretization information for each model - mfdata : SimulationDict - Custom dictionary containing all model data for the simulation - - """ - - def __init__(self, path: Union[str, os.PathLike], mfsim): - # --- formatting variables --- - self.indent_string = " " - self.constant_formatting = ["constant", ""] - self._max_columns_of_data = 20 - self.wrap_multidim_arrays = True - self._float_precision = 8 - self._float_characters = 15 - self.write_headers = True - self._sci_note_upper_thres = 100000 - self._sci_note_lower_thres = 0.001 - self.fast_write = True - self.comments_on = False - self.auto_set_sizes = True - self.verify_data = True - self.debug = False - self.verbose = True - self.verbosity_level = VerbosityLevel.normal - self.max_columns_user_set = False - self.max_columns_auto_set = False - - self._update_str_format() - - # --- file path --- - self.mfpath = MFFileMgmt(path, mfsim) - - # --- ease of use variables to make working with modflow input and - # output data easier --- model dimension class for each model - self.model_dimensions = {} - - # --- model data --- - self.mfdata = SimulationDict(self.mfpath) - - # --- temporary variables --- - # other external files referenced - self.referenced_files = {} - - @property - def lazy_io(self): - if not self.auto_set_sizes and not self.verify_data: - return True - return False - - @lazy_io.setter - def lazy_io(self, val): - if val: - self.auto_set_sizes = False - self.verify_data = False - else: - self.auto_set_sizes = True - self.verify_data = True - - @property - def max_columns_of_data(self): - return self._max_columns_of_data - - @max_columns_of_data.setter - def max_columns_of_data(self, val): - if not self.max_columns_user_set and ( - not self.max_columns_auto_set or val > self._max_columns_of_data - ): - self._max_columns_of_data = val - self.max_columns_user_set = True - - @property - def float_precision(self): - """ - Gets precision of floating point numbers. - """ - return self._float_precision - - @float_precision.setter - def float_precision(self, value): - """ - Sets precision of floating point numbers. 
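A sketch of adjusting these formatting settings (values are arbitrary; "sim" is an assumed existing simulation):

sd = sim.simulation_data
sd.float_precision = 5    # decimal digits written for floats
sd.float_characters = 12  # total character width of a float field
sd.lazy_io = True         # turns off auto_set_sizes and verify_data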
- - Parameters - ---------- - value: float - floating point precision - - """ - self._float_precision = value - self._update_str_format() - - @property - def float_characters(self): - """ - Gets max characters used in floating point numbers. - """ - return self._float_characters - - @float_characters.setter - def float_characters(self, value): - """ - Sets max characters used in floating point numbers. - - Parameters - ---------- - value: float - floating point max characters - - """ - self._float_characters = value - self._update_str_format() - - def set_sci_note_upper_thres(self, value): - """ - Sets threshold number where any number larger than threshold - is represented in scientific notation. - - Parameters - ---------- - value: float - threshold value - - """ - self._sci_note_upper_thres = value - self._update_str_format() - - def set_sci_note_lower_thres(self, value): - """ - Sets threshold number where any number smaller than threshold - is represented in scientific notation. - - Parameters - ---------- - value: float - threshold value - - """ - self._sci_note_lower_thres = value - self._update_str_format() - - def _update_str_format(self): - """ - Update floating point formatting strings.""" - self.reg_format_str = f"{{:.{self._float_precision}E}}" - self.sci_format_str = ( - f"{{:{self._float_characters}.{self._float_precision}f}}" - ) - - -class MFSimulation(PackageContainer): - """ - Entry point into any MODFLOW simulation. - MFSimulation is used to load, build, and/or save a MODFLOW 6 simulation. A MFSimulation object must be created before creating any of the MODFLOW 6 model objects. @@ -391,56 +16,60 @@ class MFSimulation(PackageContainer): Parameters ---------- sim_name : str - Name of the simulation. - version : str - Version of MODFLOW 6 executable - exe_name : str - Path to MODFLOW 6 executable - sim_ws : str - Path to MODFLOW 6 simulation working folder. This is the folder - containing the simulation name file. - verbosity_level : int - Verbosity level of standard output from 0 to 2. When 0 is specified no - standard output is written. When 1 is specified standard - error/warning messages with some informational messages are written. - When 2 is specified full error/warning/informational messages are - written (this is ideal for debugging). - continue_ : bool - Sets the continue option in the simulation name file. The continue - option is a keyword flag to indicate that the simulation should - continue even if one or more solutions do not converge. - nocheck : bool - Sets the nocheck option in the simulation name file. The nocheck - option is a keyword flag to indicate that the model input check - routines should not be called prior to each time step. Checks - are performed by default. - memory_print_option : str - Sets memory_print_option in the simulation name file. - Memory_print_option is a flag that controls printing of detailed - memory manager usage to the end of the simulation list file. NONE - means do not print detailed information. SUMMARY means print only - the total memory for each simulation component. ALL means print - information for each variable stored in the memory manager. NONE is - default if memory_print_option is not specified. - write_headers: bool - When true flopy writes a header to each package file indicating that - it was created by flopy. - lazy_io: bool - When true flopy only reads external data when the data is requested - and only writes external data if the data has changed. 
This option - automatically overrides the verify_data and auto_set_sizes, turning - both off. - Examples - -------- - >>> s = MFSimulation.load('my simulation', 'simulation.nam') - - Attributes - ---------- - sim_name : str - Name of the simulation - name_file : MFPackage - Simulation name file package - + Name of the simulation + continue_ : boolean + * continue (boolean) keyword flag to indicate that the simulation + should continue even if one or more solutions do not converge. + nocheck : boolean + * nocheck (boolean) keyword flag to indicate that the model input check + routines should not be called prior to each time step. Checks are + performed by default. + memory_print_option : string + * memory_print_option (string) is a flag that controls printing of + detailed memory manager usage to the end of the simulation list file. + NONE means do not print detailed information. SUMMARY means print + only the total memory for each simulation component. ALL means print + information for each variable stored in the memory manager. NONE is + default if MEMORY_PRINT_OPTION is not specified. + maxerrors : integer + * maxerrors (integer) maximum number of errors that will be stored and + printed. + tdis6 : string + * tdis6 (string) is the name of the Temporal Discretization (TDIS) + Input File. + models : [mtype, mfname, mname] + * mtype (string) is the type of model to add to simulation. + * mfname (string) is the file name of the model name file. + * mname (string) is the user-assigned name of the model. The model name + cannot exceed 16 characters and must not have blanks within the name. + The model name is case insensitive; any lowercase letters are + converted and stored as upper case letters. + exchanges : [exgtype, exgfile, exgmnamea, exgmnameb] + * exgtype (string) is the exchange type. + * exgfile (string) is the input file for the exchange. + * exgmnamea (string) is the name of the first model that is part of + this exchange. + * exgmnameb (string) is the name of the second model that is part of + this exchange. + mxiter : integer + * mxiter (integer) is the maximum number of outer iterations for this + solution group. The default value is 1. If there is only one solution + in the solution group, then MXITER must be 1. + solutiongroup : [slntype, slnfname, slnmnames] + * slntype (string) is the type of solution. The Integrated Model + Solution (IMS6) is the only supported option in this version. + * slnfname (string) name of file containing solution input. + * slnmnames (string) is the array of model names to add to this + solution. The number of model names is determined by the number of + model names the user provides on this line. 
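A sketch of how the generated class exposes these name-file options (workspace name is hypothetical):

import flopy

sim = flopy.mf6.MFSimulation(
    sim_name="demo",
    sim_ws="demo_ws",               # hypothetical workspace
    continue_=True,                 # keep going if a solution fails to converge
    memory_print_option="summary",
    maxerrors=100,
)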
+ + Methods + ------- + load : (sim_name : str, version : string, + exe_name : str or PathLike, sim_ws : str or PathLike, strict : bool, + verbosity_level : int, load_only : list, verify_data : bool, + write_headers : bool, lazy_io : bool) : MFSimulation + a class method that loads a simulation from files """ def __init__( @@ -450,210 +79,32 @@ def __init__( exe_name: Union[str, os.PathLike] = "mf6", sim_ws: Union[str, os.PathLike] = os.curdir, verbosity_level=1, + write_headers=True, + lazy_io=False, continue_=None, nocheck=None, memory_print_option=None, - write_headers=True, - lazy_io=False, + maxerrors=None, ): - super().__init__(MFSimulationData(sim_ws, self), sim_name) - self.simulation_data.verbosity_level = self._resolve_verbosity_level( - verbosity_level - ) - self.simulation_data.write_headers = write_headers - if lazy_io: - self.simulation_data.lazy_io = True - - # verify metadata - fpdata = mfstructure.MFStructure() - if not fpdata.valid: - excpt_str = ( - "Invalid package metadata. Unable to load MODFLOW " - "file structure metadata." - ) - raise FlopyException(excpt_str) - - # initialize - self.dimensions = None - self.type = "Simulation" - - self.version = version - self.exe_name = exe_name - self._models = {} - self._tdis_file = None - self._exchange_files = {} - self._solution_files = {} - self._other_files = {} - self.structure = fpdata.sim_struct - self.model_type = None - - self._exg_file_num = {} - - self.simulation_data.mfpath.set_last_accessed_path() - - # build simulation name file - self.name_file = mfnam.ModflowNam( - self, - filename="mfsim.nam", - continue_=continue_, - nocheck=nocheck, - memory_print_option=memory_print_option, - _internal_package=True, - ) - - # try to build directory structure - sim_path = self.simulation_data.mfpath.get_sim_path() - if not os.path.isdir(sim_path): - try: - os.makedirs(sim_path) - except OSError as e: - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.quiet.value - ): - print( - "An error occurred when trying to create the " - "directory {}: {}".format(sim_path, e.strerror) - ) - - # set simulation validity initially to false since the user must first - # add at least one model to the simulation and fill out the name and - # tdis files - self.valid = False - - def __getattr__(self, item): - """ - Override __getattr__ to allow retrieving models. - - __getattr__ is used to allow for getting models and packages as if - they are attributes - - Parameters - ---------- - item : str - model or package name - - - Returns - ------- - md : Model or package object - Model or package object of type :class:flopy6.mfmodel or - :class:flopy6.mfpackage - - """ - if item == "valid" or not hasattr(self, "valid"): - raise AttributeError(item) - - models = [] - if item in self.structure.model_types: - # get all models of this type - for model in self._models.values(): - if model.model_type == item or model.model_type[:-1] == item: - models.append(model) - - if len(models) > 0: - return models - elif item in self._models: - model = self.get_model(item) - if model is not None: - return model - raise AttributeError(item) - else: - package = self.get_package(item) - if package is not None: - return package - raise AttributeError(item) - - def __repr__(self): - """ - Override __repr__ to print custom string. - - Returns - -------- - repr string : str - string describing object - - """ - return self._get_data_str(True) - - def __str__(self): - """ - Override __str__ to print custom string. 
- - Returns - -------- - str string : str - string describing object - - """ - return self._get_data_str(False) - - def _get_data_str(self, formal): - file_mgt = self.simulation_data.mfpath - data_str = ( - "sim_name = {}\nsim_path = {}\nexe_name = " - "{}\n" - "\n".format(self.name, file_mgt.get_sim_path(), self.exe_name) + super().__init__( + sim_name=sim_name, + version=version, + exe_name=exe_name, + sim_ws=sim_ws, + verbosity_level=verbosity_level, + write_headers=write_headers, + lazy_io=lazy_io, ) - for package in self._packagelist: - pk_str = package._get_data_str(formal, False) - if formal: - if len(pk_str.strip()) > 0: - data_str = ( - "{}###################\nPackage {}\n" - "###################\n\n" - "{}\n".format(data_str, package._get_pname(), pk_str) - ) - else: - if len(pk_str.strip()) > 0: - data_str = ( - "{}###################\nPackage {}\n" - "###################\n\n" - "{}\n".format(data_str, package._get_pname(), pk_str) - ) - for model in self._models.values(): - if formal: - mod_repr = repr(model) - if len(mod_repr.strip()) > 0: - data_str = ( - "{}@@@@@@@@@@@@@@@@@@@@\nModel {}\n" - "@@@@@@@@@@@@@@@@@@@@\n\n" - "{}\n".format(data_str, model.name, mod_repr) - ) - else: - mod_str = str(model) - if len(mod_str.strip()) > 0: - data_str = ( - "{}@@@@@@@@@@@@@@@@@@@@\nModel {}\n" - "@@@@@@@@@@@@@@@@@@@@\n\n" - "{}\n".format(data_str, model.name, mod_str) - ) - return data_str - - @property - def model_names(self): - """ - Return a list of model names associated with this simulation. - - Returns - -------- - list: list of model names - - """ - return list(self._models.keys()) - - @property - def exchange_files(self): - """ - Return list of exchange files associated with this simulation. + self.name_file.continue_.set_data(continue_) + self.name_file.nocheck.set_data(nocheck) + self.name_file.memory_print_option.set_data(memory_print_option) + self.name_file.maxerrors.set_data(maxerrors) - Returns - -------- - list: list of exchange names - - """ - return self._exchange_files.values() + self.continue_ = self.name_file.continue_ + self.nocheck = self.name_file.nocheck + self.memory_print_option = self.name_file.memory_print_option + self.maxerrors = self.name_file.maxerrors @classmethod def load( @@ -669,1841 +120,16 @@ def load( write_headers=True, lazy_io=False, ): - """ - Load an existing model. - - Parameters - ---------- - sim_name : str - Name of the simulation. - version : str - MODFLOW version - exe_name : str or PathLike - Path to MODFLOW executable (relative to the simulation workspace or absolute) - sim_ws : str or PathLike - Path to simulation workspace - strict : bool - Strict enforcement of file formatting - verbosity_level : int - Verbosity level of standard output - 0: No standard output - 1: Standard error/warning messages with some informational - messages - 2: Verbose mode with full error/warning/informational - messages. This is ideal for debugging. - load_only : list - List of package abbreviations or package names corresponding to - packages that flopy will load. default is None, which loads all - packages. the discretization packages will load regardless of this - setting. subpackages, like time series and observations, will also - load regardless of this setting. - example list: ['ic', 'maw', 'npf', 'oc', 'ims', 'gwf6-gwf6'] - verify_data : bool - Verify data when it is loaded. 
this can slow down loading - write_headers: bool - When true flopy writes a header to each package file indicating - that it was created by flopy - lazy_io: bool - When true flopy only reads external data when the data is requested - and only writes external data if the data has changed. This option - automatically overrides the verify_data and auto_set_sizes, turning - both off. - Returns - ------- - sim : MFSimulation object - - Examples - -------- - >>> s = flopy.mf6.mfsimulation.load('my simulation') - - """ - # initialize - instance = cls( + return mfsimbase.MFSimulationBase.load( + cls, sim_name, version, exe_name, sim_ws, + strict, verbosity_level, - write_headers=write_headers, - ) - verbosity_level = instance.simulation_data.verbosity_level - - instance.simulation_data.verify_data = verify_data - if lazy_io: - instance.simulation_data.lazy_io = True - - if verbosity_level.value >= VerbosityLevel.normal.value: - print("loading simulation...") - - # build case consistent load_only dictionary for quick lookups - load_only = instance._load_only_dict(load_only) - - # load simulation name file - if verbosity_level.value >= VerbosityLevel.normal.value: - print(" loading simulation name file...") - instance.name_file.load(strict) - - # load TDIS file - tdis_pkg = f"tdis{mfstructure.MFStructure().get_version_string()}" - tdis_attr = getattr(instance.name_file, tdis_pkg) - instance._tdis_file = mftdis.ModflowTdis( - instance, filename=tdis_attr.get_data() - ) - - instance._tdis_file._filename = instance.simulation_data.mfdata[ - ("nam", "timing", tdis_pkg) - ].get_data() - if verbosity_level.value >= VerbosityLevel.normal.value: - print(" loading tdis package...") - instance._tdis_file.load(strict) - - # load models - try: - model_recarray = instance.simulation_data.mfdata[ - ("nam", "models", "models") - ] - models = model_recarray.get_data() - except MFDataException as mfde: - message = ( - "Error occurred while loading model names from the " - "simulation name file." - ) - raise MFDataException( - mfdata_except=mfde, - model=instance.name, - package="nam", - message=message, - ) - for item in models: - # resolve model working folder and name file - path, name_file = os.path.split(item[1]) - model_obj = PackageContainer.model_factory(item[0][:-1].lower()) - # load model - if verbosity_level.value >= VerbosityLevel.normal.value: - print(f" loading model {item[0].lower()}...") - instance._models[item[2]] = model_obj.load( - instance, - instance.structure.model_struct_objs[item[0].lower()], - item[2], - name_file, - version, - exe_name, - strict, - path, - load_only, - ) - - # load exchange packages and dependent packages - try: - exchange_recarray = instance.name_file.exchanges - has_exch_data = exchange_recarray.has_data() - except MFDataException as mfde: - message = ( - "Error occurred while loading exchange names from the " - "simulation name file." - ) - raise MFDataException( - mfdata_except=mfde, - model=instance.name, - package="nam", - message=message, - ) - if has_exch_data: - try: - exch_data = exchange_recarray.get_data() - except MFDataException as mfde: - message = ( - "Error occurred while loading exchange names from " - "the simulation name file." 
- ) - raise MFDataException( - mfdata_except=mfde, - model=instance.name, - package="nam", - message=message, - ) - for exgfile in exch_data: - if load_only is not None and not instance._in_pkg_list( - load_only, exgfile[0], exgfile[2] - ): - if ( - instance.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print(f" skipping package {exgfile[0].lower()}...") - continue - # get exchange type by removing numbers from exgtype - exchange_type = "".join( - [char for char in exgfile[0] if not char.isdigit()] - ).upper() - # get exchange number for this type - if exchange_type not in instance._exg_file_num: - exchange_file_num = 0 - instance._exg_file_num[exchange_type] = 1 - else: - exchange_file_num = instance._exg_file_num[exchange_type] - instance._exg_file_num[exchange_type] += 1 - - exchange_name = f"{exchange_type}_EXG_{exchange_file_num}" - # find package class the corresponds to this exchange type - package_obj = instance.package_factory( - exchange_type.replace("-", "").lower(), "" - ) - if not package_obj: - message = ( - "An error occurred while loading the " - "simulation name file. Invalid exchange type " - '"{}" specified.'.format(exchange_type) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - instance.name, - "nam", - "nam", - "loading simulation name file", - exchange_recarray.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - instance._simulation_data.debug, - ) - - # build and load exchange package object - exchange_file = package_obj( - instance, - exgtype=exgfile[0], - exgmnamea=exgfile[2], - exgmnameb=exgfile[3], - filename=exgfile[1], - pname=exchange_name, - loading_package=True, - ) - if verbosity_level.value >= VerbosityLevel.normal.value: - print( - f" loading exchange package {exchange_file._get_pname()}..." - ) - exchange_file.load(strict) - instance._exchange_files[exgfile[1]] = exchange_file - - # load simulation packages - solution_recarray = instance.simulation_data.mfdata[ - ("nam", "solutiongroup", "solutiongroup") - ] - - try: - solution_group_dict = solution_recarray.get_data() - except MFDataException as mfde: - message = ( - "Error occurred while loading solution groups from " - "the simulation name file." - ) - raise MFDataException( - mfdata_except=mfde, - model=instance.name, - package="nam", - message=message, - ) - for solution_group in solution_group_dict.values(): - for solution_info in solution_group: - if load_only is not None and not instance._in_pkg_list( - load_only, solution_info[0], solution_info[2] - ): - if ( - instance.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print( - f" skipping package {solution_info[0].lower()}..." - ) - continue - # create solution package - sln_package_obj = instance.package_factory( - solution_info[0][:-1].lower(), "" - ) - sln_package = sln_package_obj( - instance, - filename=solution_info[1], - pname=solution_info[2], - ) - - if verbosity_level.value >= VerbosityLevel.normal.value: - print( - f" loading solution package " - f"{sln_package._get_pname()}..." - ) - sln_package.load(strict) - - instance.simulation_data.mfpath.set_last_accessed_path() - if verify_data: - instance.check() - return instance - - def check( - self, - f: Optional[Union[str, os.PathLike]] = None, - verbose=True, - level=1, - ): - """ - Check model data for common errors. - - Parameters - ---------- - f : str or PathLike, optional - String defining file name or file handle for summary file - of check method output. 
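A hedged example of both call styles for check, assuming the sim object from the load sketch above (the summary file name is also an assumption):

    # run full checks on every model and write a summary file
    chk_results = sim.check(f="check_summary.txt", verbose=True, level=1)

    # or simply print check results to the screen
    chk_results = sim.check()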
If str or pathlike, a file handle - is created. If None, the method does not write results to - a summary file. (default is None) - verbose : bool - Boolean flag used to determine if check method results are - written to the screen - level : int - Check method analysis level. If level=0, summary checks are - performed. If level=1, full checks are performed. - - Returns - ------- - check list: list - Python list containing simulation check results - - Examples - -------- - - >>> import flopy - >>> m = flopy.modflow.Modflow.load('model.nam') - >>> m.check() - """ - # check instance for simulation-level check - chk_list = [] - - # check models - for model in self._models.values(): - print(f'Checking model "{model.name}"...') - chk_list.append(model.check(f, verbose, level)) - - print("Checking for missing simulation packages...") - if self._tdis_file is None: - if chk_list: - chk_list[0]._add_to_summary( - "Error", desc="\r No tdis package", package="model" - ) - print("Error: no tdis package") - if len(self._solution_files) == 0: - if chk_list: - chk_list[0]._add_to_summary( - "Error", desc="\r No solver package", package="model" - ) - print("Error: no solution package") - return chk_list - - @property - def sim_path(self) -> Path: - return Path(self.simulation_data.mfpath.get_sim_path()) - - @property - def sim_package_list(self): - """List of all "simulation level" packages""" - package_list = [] - if self._tdis_file is not None: - package_list.append(self._tdis_file) - for sim_package in self._solution_files.values(): - package_list.append(sim_package) - for sim_package in self._exchange_files.values(): - package_list.append(sim_package) - for sim_package in self._other_files.values(): - package_list.append(sim_package) - return package_list - - def load_package( - self, - ftype, - fname: Union[str, os.PathLike], - pname, - strict, - ref_path: Union[str, os.PathLike], - dict_package_name=None, - parent_package=None, - ): - """ - Load a package from a file. - - Parameters - ---------- - ftype : str - the file type - fname : str or PathLike - the path of the file containing the package input - pname : str - the user-defined name for the package - strict : bool - strict mode when loading the file - ref_path : str - path to the file. 
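Most users reach load_package indirectly through MFSimulation.load, but a direct call is possible; a sketch under assumed arguments (the file and package names are hypothetical):

    # load a package file that is not listed in the simulation name file
    extra_pkg = sim.load_package(
        ftype="ims",
        fname="extra_solver.ims",
        pname="ims_extra",
        strict=True,
        ref_path=None,
    )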
uses local path if set to None - dict_package_name : str - package name for dictionary lookup - parent_package : MFPackage - parent package - - """ - if ( - ftype in self.structure.package_struct_objs - and self.structure.package_struct_objs[ftype].multi_package_support - ) or ( - ftype in self.structure.utl_struct_objs - and self.structure.utl_struct_objs[ftype].multi_package_support - ): - # resolve dictionary name for package - if dict_package_name is not None: - if parent_package is not None: - dict_package_name = f"{parent_package.path[-1]}_{ftype}" - else: - # use dict_package_name as the base name - if ftype in self._ftype_num_dict: - self._ftype_num_dict[dict_package_name] += 1 - else: - self._ftype_num_dict[dict_package_name] = 0 - dict_package_name = "{}_{}".format( - dict_package_name, - self._ftype_num_dict[dict_package_name], - ) - else: - # use ftype as the base name - if ftype in self._ftype_num_dict: - self._ftype_num_dict[ftype] += 1 - else: - self._ftype_num_dict[ftype] = 0 - if pname is not None: - dict_package_name = pname - else: - dict_package_name = ( - f"{ftype}_{self._ftype_num_dict[ftype]}" - ) - else: - dict_package_name = ftype - - # get package object from file type - package_obj = self.package_factory(ftype, "") - # create package - package = package_obj( - self, - filename=fname, - pname=dict_package_name, - parent_file=parent_package, - loading_package=True, - ) - package.load(strict) - self._other_files[package.filename] = package - # register child package with the simulation - self._add_package(package, package.path) - if parent_package is not None: - # register child package with the parent package - parent_package._add_package(package, package.path) - return package - - def register_ims_package( - self, solution_file: MFPackage, model_list: Union[str, List[str]] - ): - self.register_solution_package(solution_file, model_list) - - def register_solution_package( - self, solution_file: MFPackage, model_list: Union[str, List[str]] - ): - """ - Register a solution package with the simulation. - - Parameters - solution_file : MFPackage - solution package to register - model_list : list of strings - list of models using the solution package to be registered - - """ - if isinstance(model_list, str): - model_list = [model_list] - - if ( - solution_file.package_type - not in mfstructure.MFStructure().flopy_dict["solution_packages"] - ): - comment = ( - 'Parameter "solution_file" is not a valid solution file. ' - 'Expected solution file, but type "{}" was given' - ".".format(type(solution_file)) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - None, - "solution", - "", - "registering solution package", - "", - inspect.stack()[0][3], - type_, - value_, - traceback_, - comment, - self.simulation_data.debug, - ) - valid_model_types = mfstructure.MFStructure().flopy_dict[ - "solution_packages" - ][solution_file.package_type] - # remove models from existing solution groups - if model_list is not None: - for model in model_list: - md = self.get_model(model) - if md is not None and ( - md.model_type not in valid_model_types - and "*" not in valid_model_types - ): - comment = ( - f"Model type {md.model_type} is not a valid type " - f"for solution file {solution_file.filename} solution " - f"file type {solution_file.package_type}. 
Valid model " - f"types are {valid_model_types}" - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - None, - "solution", - "", - "registering solution package", - "", - inspect.stack()[0][3], - type_, - value_, - traceback_, - comment, - self.simulation_data.debug, - ) - self._remove_from_all_solution_groups(model) - - # register solution package with model list - in_simulation = False - pkg_with_same_name = None - for file in self._solution_files.values(): - if file is solution_file: - in_simulation = True - if ( - file.package_name == solution_file.package_name - and file != solution_file - ): - pkg_with_same_name = file - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print( - "WARNING: solution package with name {} already exists. " - "New solution package will replace old package" - ".".format(file.package_name) - ) - self._remove_package(self._solution_files[file.filename]) - del self._solution_files[file.filename] - break - # register solution package - if not in_simulation: - self._add_package( - solution_file, self._get_package_path(solution_file) - ) - # do not allow a solution package to be registered twice with the - # same simulation - if not in_simulation: - # create unique file/package name - if solution_file.package_name is None: - file_num = len(self._solution_files) - 1 - solution_file.package_name = ( - f"{solution_file.package_type}_{file_num}" - ) - if solution_file.filename in self._solution_files: - solution_file.filename = MFFileMgmt.unique_file_name( - solution_file.filename, self._solution_files - ) - # add solution package to simulation - self._solution_files[solution_file.filename] = solution_file - - # If solution file is being replaced, replace solution filename in - # solution group - if pkg_with_same_name is not None and self._is_in_solution_group( - pkg_with_same_name.filename, 1 - ): - # change existing solution group to reflect new solution file - self._replace_solution_in_solution_group( - pkg_with_same_name.filename, 1, solution_file.filename - ) - # only allow solution package to be registered to one solution group - elif model_list is not None: - sln_file_in_group = self._is_in_solution_group( - solution_file.filename, 1 - ) - # add solution group to the simulation name file - solution_recarray = self.name_file.solutiongroup - solution_group_list = solution_recarray.get_active_key_list() - if len(solution_group_list) == 0: - solution_group_num = 0 - else: - solution_group_num = solution_group_list[-1][0] - - if sln_file_in_group: - self._append_to_solution_group( - solution_file.filename, model_list - ) - else: - if self.name_file.mxiter.get_data(solution_group_num) is None: - self.name_file.mxiter.add_transient_key(solution_group_num) - - # associate any models in the model list to this - # simulation file - version_string = mfstructure.MFStructure().get_version_string() - solution_pkg = f"{solution_file.package_abbr}{version_string}" - new_record = [solution_pkg, solution_file.filename] - for model in model_list: - new_record.append(model) - try: - solution_recarray.append_list_as_record( - new_record, solution_group_num - ) - except MFDataException as mfde: - message = ( - "Error occurred while updating the " - "simulation name file with the solution package " - 'file "{}".'.format(solution_file.filename) - ) - raise MFDataException( - mfdata_except=mfde, package="nam", message=message - ) - - @staticmethod - def _rename_package_group(group_dict, name): - package_type_count = {} - # 
first build an array to avoid key modification errors - package_array = [] - for package in group_dict.values(): - package_array.append(package) - # update package file names and count - for package in package_array: - if package.package_type not in package_type_count: - file_name = f"{name}.{package.package_type}" - package_type_count[package.package_type] = 1 - else: - package_type_count[package.package_type] += 1 - ptc = package_type_count[package.package_type] - file_name = f"{name}_{ptc}.{package.package_type}" - base_filepath = os.path.split(package.filename)[0] - if base_filepath != "": - # include existing relative path in new file name - file_name = os.path.join(base_filepath, file_name) - package.filename = file_name - - def _rename_exchange_file(self, package, new_filename): - self._exchange_files[package.filename] = package - try: - exchange_recarray_data = self.name_file.exchanges.get_data() - except MFDataException as mfde: - message = ( - "An error occurred while retrieving exchange " - "data from the simulation name file. The error " - "occurred while registering exchange file " - f'"{package.filename}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - if exchange_recarray_data is not None: - for index, exchange in zip( - range(0, len(exchange_recarray_data)), - exchange_recarray_data, - ): - if exchange[1] == package.filename: - # update existing exchange - exchange_recarray_data[index][1] = new_filename - ex_recarray = self.name_file.exchanges - try: - ex_recarray.set_data(exchange_recarray_data) - except MFDataException as mfde: - message = ( - "An error occurred while setting " - "exchange data in the simulation name " - "file. The error occurred while " - "registering the following " - "values (exgtype, filename, " - f'exgmnamea, exgmnameb): "{package.exgtype} ' - f"{package.filename} {package.exgmnamea}" - f'{package.exgmnameb}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - return - - def _set_timing_block(self, file_name): - struct_root = mfstructure.MFStructure() - tdis_pkg = f"tdis{struct_root.get_version_string()}" - tdis_attr = getattr(self.name_file, tdis_pkg) - try: - tdis_attr.set_data(file_name) - except MFDataException as mfde: - message = ( - "An error occurred while setting the tdis package " - f'file name "{file_name}". The error occurred while ' - "registering the tdis package with the " - "simulation" - ) - raise MFDataException( - mfdata_except=mfde, - package=file_name, - message=message, - ) - - def update_package_filename(self, package, new_name): - """ - Updates internal arrays to be consistent with a new file name. - This is for internal flopy library use only. 
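While update_package_filename is internal, register_solution_package above is part of the public workflow; a short sketch, with model names and solver settings as assumptions:

    # create a solver and assign it to specific models by name
    ims = flopy.mf6.ModflowIms(sim, complexity="SIMPLE")
    sim.register_solution_package(ims, ["model1", "model2"])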
- - Parameters - ---------- - package: MFPackage - Package with new name - new_name: str - Package's new name - - """ - if ( - self._tdis_file is not None - and package.filename == self._tdis_file.filename - ): - self._set_timing_block(new_name) - elif package.filename in self._exchange_files: - self._exchange_files[new_name] = self._exchange_files.pop( - package.filename - ) - self._rename_exchange_file(package, new_name) - elif package.filename in self._solution_files: - self._solution_files[new_name] = self._solution_files.pop( - package.filename - ) - self._update_solution_group(package.filename, new_name) - else: - self._other_files[new_name] = self._other_files.pop( - package.filename - ) - - def rename_all_packages(self, name): - """ - Rename all packages with name as prefix. - - Parameters - ---------- - name: str - Prefix of package names - - """ - if self._tdis_file is not None: - self._tdis_file.filename = f"{name}.{self._tdis_file.package_type}" - - self._rename_package_group(self._exchange_files, name) - self._rename_package_group(self._solution_files, name) - self._rename_package_group(self._other_files, name) - for model in self._models.values(): - model.rename_all_packages(name) - - def set_all_data_external( - self, check_data=True, external_data_folder=None - ): - """Sets the simulation's list and array data to be stored externally. - - Parameters - ---------- - check_data: bool - Determines if data error checking is enabled during this - process. Data error checking can be slow on large datasets. - external_data_folder: str or PathLike - Path relative to the simulation path or model relative path - (see use_model_relative_path parameter), where external data - will be stored - """ - # copy any files whose paths have changed - self.simulation_data.mfpath.copy_files() - # set data external for all packages in all models - for model in self._models.values(): - model.set_all_data_external(check_data, external_data_folder) - # set data external for solution packages - for package in self._solution_files.values(): - package.set_all_data_external(check_data, external_data_folder) - # set data external for other packages - for package in self._other_files.values(): - package.set_all_data_external(check_data, external_data_folder) - for package in self._exchange_files.values(): - package.set_all_data_external(check_data, external_data_folder) - - def set_all_data_internal(self, check_data=True): - # set data external for all packages in all models - for model in self._models.values(): - model.set_all_data_internal(check_data) - # set data external for solution packages - for package in self._solution_files.values(): - package.set_all_data_internal(check_data) - # set data external for other packages - for package in self._other_files.values(): - package.set_all_data_internal(check_data) - # set data external for exchange packages - for package in self._exchange_files.values(): - package.set_all_data_internal(check_data) - - def write_simulation( - self, ext_file_action=ExtFileAction.copy_relative_paths, silent=False - ): - """ - Write the simulation to files. - - Parameters - ext_file_action : ExtFileAction - Defines what to do with external files when the simulation - path has changed. Defaults to copy_relative_paths which - copies only files with relative paths, leaving files defined - by absolute paths fixed. 
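For example, data can be externalized before writing the input files; the folder name here is an assumption:

    # store array and list data in external files under "external",
    # then write all simulation and model input files
    sim.set_all_data_external(check_data=False, external_data_folder="external")
    sim.write_simulation()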
-        silent : bool
-            Writes out the simulation in silent mode (verbosity_level = 0)
-
-        """
-        sim_data = self.simulation_data
-        if not sim_data.max_columns_user_set:
-            # search for dis packages
-            for model in self._models.values():
-                dis = model.get_package("dis")
-                if dis is not None and hasattr(dis, "ncol"):
-                    sim_data.max_columns_of_data = dis.ncol.get_data()
-                    sim_data.max_columns_user_set = False
-                    sim_data.max_columns_auto_set = True
-
-        saved_verb_lvl = self.simulation_data.verbosity_level
-        if silent:
-            self.simulation_data.verbosity_level = VerbosityLevel.quiet
-
-        # write simulation name file
-        if (
-            self.simulation_data.verbosity_level.value
-            >= VerbosityLevel.normal.value
-        ):
-            print("writing simulation...")
-            print("  writing simulation name file...")
-        self.name_file.write(ext_file_action=ext_file_action)
-
-        # write TDIS file
-        if (
-            self.simulation_data.verbosity_level.value
-            >= VerbosityLevel.normal.value
-        ):
-            print("  writing simulation tdis package...")
-        self._tdis_file.write(ext_file_action=ext_file_action)
-
-        # write solution files
-        for solution_file in self._solution_files.values():
-            if (
-                self.simulation_data.verbosity_level.value
-                >= VerbosityLevel.normal.value
-            ):
-                print(
-                    f"  writing solution package "
-                    f"{solution_file._get_pname()}..."
-                )
-            solution_file.write(ext_file_action=ext_file_action)
-
-        # write exchange files
-        for exchange_file in self._exchange_files.values():
-            exchange_file.write()
-
-        # write other packages
-        for pp in self._other_files.values():
-            if (
-                self.simulation_data.verbosity_level.value
-                >= VerbosityLevel.normal.value
-            ):
-                print(f"  writing package {pp._get_pname()}...")
-            pp.write(ext_file_action=ext_file_action)
-
-        # FIX: model working folder should be model name file folder
-
-        # write models
-        for model in self._models.values():
-            if (
-                self.simulation_data.verbosity_level.value
-                >= VerbosityLevel.normal.value
-            ):
-                print(f"  writing model {model.name}...")
-            model.write(ext_file_action=ext_file_action)
-
-        self.simulation_data.mfpath.set_last_accessed_path()
-
-        if silent:
-            self.simulation_data.verbosity_level = saved_verb_lvl
-
-    def set_sim_path(self, path: Union[str, os.PathLike]):
-        """Set the path to the simulation's working folder.
-
-        Parameters
-        ----------
-        path : str
-            Relative or absolute path to simulation root folder.
-
-        """
-        # set all data internal
-        self.set_all_data_internal()
-
-        # set simulation path
-        self.simulation_data.mfpath.set_sim_path(path, True)
-
-        if not os.path.exists(path):
-            # create new simulation folder
-            os.makedirs(path)
-
-    def run_simulation(
-        self,
-        silent=None,
-        pause=False,
-        report=False,
-        processors=None,
-        normal_msg="normal termination",
-        use_async=False,
-        cargs=None,
-    ):
-        """
-        Run the simulation.
-
-        Parameters
-        ----------
-        silent: bool
-            Run in silent mode
-        pause: bool
-            Pause at end of run
-        report: bool
-            Save stdout lines to a list (buff)
-        processors: int
-            Number of processors. Parallel simulations are only supported
-            for MODFLOW 6 simulations. (default is None)
-        normal_msg: str or list
-            Normal termination message used to determine if the run
-            terminated normally. More than one message can be provided
-            using a list. (default is 'normal termination')
-        use_async : bool
-            Asynchronously read model stdout and report with timestamps;
-            useful for long-running models, unnecessary for models that
-            finish quickly.
-        cargs : str or list of strings
-            Additional command line arguments to pass to the executable.
- default is None - - Returns - -------- - success : bool - buff : list of lines of stdout - - """ - if silent is None: - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - silent = False - else: - silent = True - return run_model( - self.exe_name, - None, - self.simulation_data.mfpath.get_sim_path(), - silent=silent, - pause=pause, - report=report, - processors=processors, - normal_msg=normal_msg, - use_async=use_async, - cargs=cargs, - ) - - def delete_output_files(self): - """Deletes simulation output files.""" - output_req = binaryfile_utils.MFOutputRequester - output_file_keys = output_req.getkeys( - self.simulation_data.mfdata, self.simulation_data.mfpath, False - ) - for path in output_file_keys.binarypathdict.values(): - if os.path.isfile(path): - os.remove(path) - - def remove_package(self, package_name): - """ - Removes package from the simulation. `package_name` can be the - package's name, type, or package object to be removed from the model. - - Parameters - ---------- - package_name : str - Name of package to be removed - - """ - if isinstance(package_name, MFPackage): - packages = [package_name] - else: - packages = self.get_package(package_name) - if not isinstance(packages, list): - packages = [packages] - for package in packages: - if ( - self._tdis_file is not None - and package.path == self._tdis_file.path - ): - self._tdis_file = None - if package.filename in self._exchange_files: - del self._exchange_files[package.filename] - if package.filename in self._solution_files: - del self._solution_files[package.filename] - self._update_solution_group(package.filename) - if package.filename in self._other_files: - del self._other_files[package.filename] - - self._remove_package(package) - - @property - def model_dict(self): - """ - Return a dictionary of models associated with this simulation. - - Returns - -------- - model dict : dict - dictionary of models - - """ - return self._models.copy() - - def get_model(self, model_name=None): - """ - Returns the models in the simulation with a given model name, name - file name, or model type. - - Parameters - ---------- - model_name : str - Name of the model to get. Passing in None or an empty list - will get the first model. - - Returns - -------- - model : MFModel - - """ - if len(self._models) == 0: - return None - - if model_name is None: - for model in self._models.values(): - return model - if model_name in self._models: - return self._models[model_name] - # do case-insensitive lookup - for name, model in self._models.items(): - if model_name.lower() == name.lower(): - return model - return None - - def get_exchange_file(self, filename): - """ - Get a specified exchange file. - - Parameters - ---------- - filename : str - Name of exchange file to get - - Returns - -------- - exchange package : MFPackage - - """ - if filename in self._exchange_files: - return self._exchange_files[filename] - else: - excpt_str = f'Exchange file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_file(self, filename): - """ - Get a specified file. - - Parameters - ---------- - filename : str - Name of mover file to get - - Returns - -------- - mover package : MFPackage - - """ - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_mvr_file(self, filename): - """ - Get a specified mover file. 
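Tying the run workflow above together, assuming sim from the earlier sketches and an mf6 executable on the system path:

    # run MODFLOW 6, capturing stdout, and report on failure
    success, buff = sim.run_simulation(report=True)
    if not success:
        print("\n".join(buff))

    # output files from a previous run can be removed first if desired:
    # sim.delete_output_files()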
- - Parameters - ---------- - filename : str - Name of mover file to get - - Returns - -------- - mover package : MFPackage - - """ - warnings.warn( - "get_mvr_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'MVR file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_mvt_file(self, filename): - """ - Get a specified mvt file. - - Parameters - ---------- - filename : str - Name of mover transport file to get - - Returns - -------- - mover transport package : MFPackage - - """ - warnings.warn( - "get_mvt_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'MVT file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_gnc_file(self, filename): - """ - Get a specified gnc file. - - Parameters - ---------- - filename : str - Name of gnc file to get - - Returns - -------- - gnc package : MFPackage - - """ - warnings.warn( - "get_gnc_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'GNC file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def remove_exchange_file(self, package): - """ - Removes the exchange file "package". This is for internal flopy - library use only. - - Parameters - ---------- - package: MFPackage - Exchange package to be removed - - """ - self._exchange_files[package.filename] = package - try: - exchange_recarray_data = self.name_file.exchanges.get_data() - except MFDataException as mfde: - message = ( - "An error occurred while retrieving exchange " - "data from the simulation name file. The error " - "occurred while registering exchange file " - f'"{package.filename}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - remove_indices = [] - if exchange_recarray_data is not None: - for index, exchange in zip( - range(0, len(exchange_recarray_data)), - exchange_recarray_data, - ): - if ( - package.filename is not None - and exchange[1] == package.filename - ): - remove_indices.append(index) - if len(remove_indices) > 0: - self.name_file.exchanges.set_data( - np.delete(exchange_recarray_data, remove_indices) - ) - - def register_exchange_file(self, package): - """ - Register an exchange package file with the simulation. This is a - call-back method made from the package and should not be called - directly. - - Parameters - ---------- - package : MFPackage - Exchange package object to register - - """ - if package.filename not in self._exchange_files: - exgtype = package.exgtype - exgmnamea = package.exgmnamea - exgmnameb = package.exgmnameb - - if exgtype is None or exgmnamea is None or exgmnameb is None: - excpt_str = ( - "Exchange packages require that exgtype, " - "exgmnamea, and exgmnameb are specified." - ) - raise FlopyException(excpt_str) - - self._exchange_files[package.filename] = package - try: - exchange_recarray_data = self.name_file.exchanges.get_data() - except MFDataException as mfde: - message = ( - "An error occurred while retrieving exchange " - "data from the simulation name file. 
The error " - "occurred while registering exchange file " - f'"{package.filename}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - if exchange_recarray_data is not None: - for index, exchange in zip( - range(0, len(exchange_recarray_data)), - exchange_recarray_data, - ): - if exchange[1] == package.filename: - # update existing exchange - exchange_recarray_data[index][0] = exgtype - exchange_recarray_data[index][2] = exgmnamea - exchange_recarray_data[index][3] = exgmnameb - ex_recarray = self.name_file.exchanges - try: - ex_recarray.set_data(exchange_recarray_data) - except MFDataException as mfde: - message = ( - "An error occurred while setting " - "exchange data in the simulation name " - "file. The error occurred while " - "registering the following " - "values (exgtype, filename, " - f'exgmnamea, exgmnameb): "{exgtype} ' - f"{package.filename} {exgmnamea}" - f'{exgmnameb}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - return - try: - # add new exchange - self.name_file.exchanges.append_data( - [(exgtype, package.filename, exgmnamea, exgmnameb)] - ) - except MFDataException as mfde: - message = ( - "An error occurred while setting exchange data " - "in the simulation name file. The error occurred " - "while registering the following values (exgtype, " - f'filename, exgmnamea, exgmnameb): "{exgtype} ' - f'{package.filename} {exgmnamea} {exgmnameb}".' - ) - raise MFDataException( - mfdata_except=mfde, - package=package._get_pname(), - message=message, - ) - if ( - package.dimensions is None - or package.dimensions.model_dim is None - ): - # resolve exchange package dimensions object - package.dimensions = package.create_package_dimensions() - - def _remove_package_by_type(self, package): - pname = None - if package.package_name is not None: - pname = package.package_name.lower() - if ( - package.package_type.lower() == "tdis" - and self._tdis_file is not None - and self._tdis_file in self._packagelist - ): - # tdis package already exists. there can be only one tdis - # package. remove existing tdis package - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print( - "WARNING: tdis package already exists. Replacing " - "existing tdis package." - ) - self._remove_package(self._tdis_file) - elif ( - package.package_type.lower() - in mfstructure.MFStructure().flopy_dict["solution_packages"] - and pname in self.package_name_dict - ): - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print( - "WARNING: Package with name " - f"{package.package_name.lower()} already exists. " - "Replacing existing package." - ) - self._remove_package(self.package_name_dict[pname]) - else: - if ( - package.filename in self._other_files - and self._other_files[package.filename] in self._packagelist - ): - # other package with same file name already exists. remove old - # package - if ( - self.simulation_data.verbosity_level.value - >= VerbosityLevel.normal.value - ): - print( - f"WARNING: package with name {pname} already exists. " - "Replacing existing package." - ) - self._remove_package(self._other_files[package.filename]) - del self._other_files[package.filename] - - def register_package( - self, - package, - add_to_package_list=True, - set_package_name=True, - set_package_filename=True, - ): - """ - Register a package file with the simulation. 
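Both registration paths are normally exercised implicitly: constructing a package with the simulation as its parent fires these callbacks. A sketch, with the TDIS settings as assumptions:

    # a TDIS package registers itself at the simulation level on creation
    tdis = flopy.mf6.ModflowTdis(
        sim, time_units="DAYS", nper=1, perioddata=[(1.0, 1, 1.0)]
    )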
This is a - call-back method made from the package and should not be called - directly. - - Parameters - ---------- - package : MFPackage - Package to register - add_to_package_list : bool - Add package to lookup list - set_package_name : bool - Produce a package name for this package - set_package_filename : bool - Produce a filename for this package - - Returns - -------- - (path : tuple, package structure : MFPackageStructure) - - """ - if set_package_filename: - # set initial package filename - base_name = os.path.basename(os.path.normpath(self.name)) - package._filename = f"{base_name}.{package.package_type}" - - package.container_type = [PackageContainerType.simulation] - path = self._get_package_path(package) - if add_to_package_list and package.package_type.lower != "nam": - if ( - package.package_type.lower() - not in mfstructure.MFStructure().flopy_dict[ - "solution_packages" - ] - ): - # all but solution packages get added here. solution packages - # are added during solution package registration - self._remove_package_by_type(package) - self._add_package(package, path) - sln_dict = mfstructure.MFStructure().flopy_dict["solution_packages"] - if package.package_type.lower() == "nam": - if not package.internal_package: - excpt_str = ( - "Unable to register nam file. Do not create your own nam " - "files. Nam files are automatically created and managed " - "for you by FloPy." - ) - print(excpt_str) - raise FlopyException(excpt_str) - return path, self.structure.name_file_struct_obj - elif package.package_type.lower() == "tdis": - self._tdis_file = package - self._set_timing_block(package.quoted_filename) - return ( - path, - self.structure.package_struct_objs[ - package.package_type.lower() - ], - ) - elif package.package_type.lower() in sln_dict: - supported_packages = sln_dict[package.package_type.lower()] - # default behavior is to register the solution package with the - # first unregistered model - unregistered_models = [] - for model_name, model in self._models.items(): - model_registered = self._is_in_solution_group( - model_name, 2, True - ) - if not model_registered and ( - model.model_type in supported_packages - or "*" in supported_packages - ): - unregistered_models.append(model_name) - if unregistered_models: - self.register_solution_package(package, unregistered_models) - else: - self.register_solution_package(package, None) - return ( - path, - self.structure.package_struct_objs[ - package.package_type.lower() - ], - ) - else: - if package.filename not in self._other_files: - self._other_files[package.filename] = package - else: - # auto generate a unique file name and register it - file_name = MFFileMgmt.unique_file_name( - package.filename, self._other_files - ) - package.filename = file_name - self._other_files[file_name] = package - - if package.package_type.lower() in self.structure.package_struct_objs: - return ( - path, - self.structure.package_struct_objs[ - package.package_type.lower() - ], - ) - elif package.package_type.lower() in self.structure.utl_struct_objs: - return ( - path, - self.structure.utl_struct_objs[package.package_type.lower()], - ) - else: - excpt_str = ( - 'Invalid package type "{}". Unable to register ' - "package.".format(package.package_type) - ) - print(excpt_str) - raise FlopyException(excpt_str) - - def rename_model_namefile(self, model, new_namefile): - """ - Rename a model's namefile. For internal flopy library use only. 
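End users typically reach this machinery through rename_all_packages, which cascades from the simulation down through each model; for instance, with an assumed prefix:

    # give every package file a common prefix and update the name files
    sim.rename_all_packages("renamed_sim")
    sim.write_simulation()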
- - Parameters - ---------- - model : MFModel - Model object whose namefile to rename - new_namefile : str - Name of the new namefile - - """ - # update simulation name file - models = self.name_file.models.get_data() - for mdl in models: - path, name_file_name = os.path.split(mdl[1]) - if name_file_name == model.name_file.filename: - mdl[1] = os.path.join(path, new_namefile) - self.name_file.models.set_data(models) - - def register_model(self, model, model_type, model_name, model_namefile): - """ - Add a model to the simulation. This is a call-back method made - from the package and should not be called directly. - - Parameters - ---------- - model : MFModel - Model object to add to simulation - sln_group : str - Solution group of model - - Returns - -------- - model_structure_object : MFModelStructure - """ - - # get model structure from model type - if model_type not in self.structure.model_struct_objs: - message = f'Invalid model type: "{model_type}".' - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - model.name, - "", - model.name, - "registering model", - "sim", - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - self.simulation_data.debug, - ) - - # add model - self._models[model_name] = model - - # update simulation name file - self.name_file.models.append_list_as_record( - [model_type, model_namefile, model_name] - ) - - if len(self._solution_files) > 0: - # register model with first solution file found - first_solution_key = next(iter(self._solution_files)) - self.register_solution_package( - self._solution_files[first_solution_key], model_name - ) - - return self.structure.model_struct_objs[model_type] - - def get_ims_package(self, key): - warnings.warn( - "get_ims_package() has been deprecated and will be " - "removed in version 3.3.7. Use " - "get_solution_package() instead.", - DeprecationWarning, - ) - return self.get_solution_package(key) - - def get_solution_package(self, key): - """ - Get the solution package with the specified `key`. - - Parameters - ---------- - key : str - solution package file name - - Returns - -------- - solution_package : MFPackage - - """ - if key in self._solution_files: - return self._solution_files[key] - return None - - def remove_model(self, model_name): - """ - Remove model with name `model_name` from the simulation - - Parameters - ---------- - model_name : str - Model name to remove from simulation - - """ - # Remove model - del self._models[model_name] - - # TODO: Fully implement this - # Update simulation name file - - def is_valid(self): - """ - Checks the validity of the solution and all of its models and - packages. Returns true if the solution is valid, false if it is not. 
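A quick pre-write sanity probe, assuming sim from the sketches above:

    # is_valid only reports completeness; it does not run MODFLOW checks
    if not sim.is_valid():
        print("simulation is missing required packages or models")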
- - - Returns - -------- - valid : bool - Whether this is a valid simulation - - """ - # name file valid - if not self.name_file.is_valid(): - return False - - # tdis file valid - if not self._tdis_file.is_valid(): - return False - - # exchanges valid - for exchange in self._exchange_files: - if not exchange.is_valid(): - return False - - # solution files valid - for solution_file in self._solution_files.values(): - if not solution_file.is_valid(): - return False - - # a model exists - if not self._models: - return False - - # models valid - for key in self._models: - if not self._models[key].is_valid(): - return False - - # each model has a solution file - - return True - - @staticmethod - def _resolve_verbosity_level(verbosity_level): - if verbosity_level == 0: - return VerbosityLevel.quiet - elif verbosity_level == 1: - return VerbosityLevel.normal - elif verbosity_level == 2: - return VerbosityLevel.verbose - else: - return verbosity_level - - @staticmethod - def _get_package_path(package): - if package.parent_file is not None: - return (package.parent_file.path) + (package.package_type,) - else: - return (package.package_type,) - - def _update_solution_group(self, solution_file, new_name=None): - solution_recarray = self.name_file.solutiongroup - for solution_group_num in solution_recarray.get_active_key_list(): - try: - rec_array = solution_recarray.get_data(solution_group_num[0]) - except MFDataException as mfde: - message = ( - "An error occurred while getting solution group" - '"{}" from the simulation name file' - ".".format(solution_group_num[0]) - ) - raise MFDataException( - mfdata_except=mfde, package="nam", message=message - ) - - new_array = [] - for record in rec_array: - if record.slnfname == solution_file: - if new_name is not None: - record.slnfname = new_name - new_array.append(tuple(record)) - else: - continue - else: - new_array.append(record) - - if not new_array: - new_array = None - - solution_recarray.set_data(new_array, solution_group_num[0]) - - def _remove_from_all_solution_groups(self, modelname): - solution_recarray = self.name_file.solutiongroup - for solution_group_num in solution_recarray.get_active_key_list(): - try: - rec_array = solution_recarray.get_data(solution_group_num[0]) - except MFDataException as mfde: - message = ( - "An error occurred while getting solution group" - '"{}" from the simulation name file' - ".".format(solution_group_num[0]) - ) - raise MFDataException( - mfdata_except=mfde, package="nam", message=message - ) - new_array = ["no_check"] - for index, record in enumerate(rec_array): - new_record = [] - new_record.append(record[0]) - new_record.append(record[1]) - for item in list(record)[2:]: - if item is not None and item.lower() != modelname.lower(): - new_record.append(item) - new_array.append(tuple(new_record)) - solution_recarray.set_data(new_array, solution_group_num[0]) - - def _append_to_solution_group(self, solution_file, new_models): - # clear models out of solution groups - if new_models is not None: - for model in new_models: - self._remove_from_all_solution_groups(model) - - # append models to solution_file - solution_recarray = self.name_file.solutiongroup - for solution_group_num in solution_recarray.get_active_key_list(): - try: - rec_array = solution_recarray.get_data(solution_group_num[0]) - except MFDataException as mfde: - message = ( - "An error occurred while getting solution group" - '"{}" from the simulation name file' - ".".format(solution_group_num[0]) - ) - raise MFDataException( - mfdata_except=mfde, 
package="nam", message=message - ) - new_array = [] - for index, record in enumerate(rec_array): - new_record = [] - rec_model_dict = {} - for index, item in enumerate(record): - if ( - record[1] == solution_file or item not in new_models - ) and item is not None: - new_record.append(item) - if index > 1 and item is not None: - rec_model_dict[item.lower()] = 1 - - if record[1] == solution_file: - for model in new_models: - if model.lower() not in rec_model_dict: - new_record.append(model) - - new_array.append(tuple(new_record)) - solution_recarray.set_data(new_array, solution_group_num[0]) - - def _replace_solution_in_solution_group(self, item, index, new_item): - solution_recarray = self.name_file.solutiongroup - for solution_group_num in solution_recarray.get_active_key_list(): - try: - rec_array = solution_recarray.get_data(solution_group_num[0]) - except MFDataException as mfde: - message = ( - "An error occurred while getting solution group" - '"{}" from the simulation name file. The error ' - 'occurred while replacing solution file "{}" with "{}"' - 'at index "{}"'.format( - solution_group_num[0], item, new_item, index - ) - ) - raise MFDataException( - mfdata_except=mfde, package="nam", message=message - ) - if rec_array is not None: - for rec_item in rec_array: - if rec_item[index] == item: - rec_item[index] = new_item - - def _is_in_solution_group(self, item, index, any_idx_after=False): - solution_recarray = self.name_file.solutiongroup - for solution_group_num in solution_recarray.get_active_key_list(): - try: - rec_array = solution_recarray.get_data(solution_group_num[0]) - except MFDataException as mfde: - message = ( - "An error occurred while getting solution group" - '"{}" from the simulation name file. The error ' - 'occurred while verifying file "{}" at index "{}" ' - "is in the simulation name file" - ".".format(solution_group_num[0], item, index) - ) - raise MFDataException( - mfdata_except=mfde, package="nam", message=message - ) - - if rec_array is not None: - for rec_item in rec_array: - if any_idx_after: - for idx in range(index, len(rec_item)): - if rec_item[idx] == item: - return True - else: - if rec_item[index] == item: - return True - return False - - def plot( - self, - model_list: Optional[Union[str, List[str]]] = None, - SelPackList=None, - **kwargs, - ): - """ - Plot simulation or models. - - Method to plot a whole simulation or a series of models - that are part of a simulation. - - Parameters - ---------- - model_list: list, optional - List of model names to plot, if none all models will be plotted - SelPackList: list, optional - List of package names to plot, if none all packages will be - plotted - kwargs: - filename_base : str - Base file name that will be used to automatically - generate file names for output image files. Plots will be - exported as image files if file_name_base is not None. - (default is None) - file_extension : str - Valid matplotlib.pyplot file extension for savefig(). - Only used if filename_base is not None. (default is 'png') - mflay : int - MODFLOW zero-based layer number to return. If None, then - all layers will be included. (default is None) - kper : int - MODFLOW zero-based stress period number to return. - (default is zero) - key : str - MFList dictionary key. 
(default is None) - - Returns - -------- - axes: (list) - matplotlib.pyplot.axes objects - - """ - from ...plot.plotutil import PlotUtilities - - axes = PlotUtilities._plot_simulation_helper( - self, model_list=model_list, SelPackList=SelPackList, **kwargs + load_only, + verify_data, + write_headers, + lazy_io, ) - return axes diff --git a/flopy/mf6/utils/createpackages.py b/flopy/mf6/utils/createpackages.py index 4e5379da61..32b721b3d2 100644 --- a/flopy/mf6/utils/createpackages.py +++ b/flopy/mf6/utils/createpackages.py @@ -295,6 +295,7 @@ def create_package_init_var( def add_var( init_vars, class_vars, + options_param_list, init_param_list, package_properties, doc_string, @@ -324,6 +325,8 @@ def add_var( if default_value is None: default_value = "None" init_param_list.append(f"{clean_ds_name}={default_value}") + if path is not None and "options" in path: + options_param_list.append(f"{clean_ds_name}={default_value}") # add to set parameter list set_param_list.append(f"{clean_ds_name}={clean_ds_name}") else: @@ -404,7 +407,7 @@ def build_model_load(model_type): " exe_name='mf6', strict=True, " "model_rel_path='.',\n" " load_only=None):\n " - "return mfmodel.MFModel.load_base(simulation, structure, " + "return mfmodel.MFModel.load_base(cls, simulation, structure, " "modelname,\n " "model_nam_file, '{}6', version,\n" " exe_name, strict, " @@ -415,13 +418,54 @@ def build_model_load(model_type): return model_load, model_load_c +def build_sim_load(): + sim_load_c = ( + " Methods\n -------\n" + " load : (sim_name : str, version : " + "string,\n exe_name : str or PathLike, " + "sim_ws : str or PathLike, strict : bool,\n verbosity_level : " + "int, load_only : list, verify_data : bool,\n " + "write_headers : bool, lazy_io : bool) : MFSimulation\n" + " a class method that loads a simulation from files" + '\n """' + ) + + sim_load = ( + " @classmethod\n def load(cls, sim_name='modflowsim', " + "version='mf6',\n " + "exe_name: Union[str, os.PathLike] = 'mf6',\n " + "sim_ws: Union[str, os.PathLike] = os.curdir,\n " + "strict=True, verbosity_level=1, load_only=None,\n " + "verify_data=False, write_headers=True,\n " + "lazy_io=False,):\n " + "return mfsimbase.MFSimulationBase.load(cls, sim_name, version, " + "\n " + "exe_name, sim_ws, strict,\n" + " verbosity_level, " + "load_only,\n " + "verify_data, write_headers, " + "\n lazy_io)" + "\n" + ) + return sim_load, sim_load_c + + def build_model_init_vars(param_list): init_var_list = [] + # build set data calls for param in param_list: param_parts = param.split("=") init_var_list.append( f" self.name_file.{param_parts[0]}.set_data({param_parts[0]})" ) + init_var_list.append("") + # build attributes + for param in param_list: + param_parts = param.split("=") + init_var_list.append( + f" self.{param_parts[0]} = self.name_file.{param_parts[0]}" + ) + return "\n".join(init_var_list) @@ -513,6 +557,7 @@ def create_packages(): package_properties = [] init_vars = [] init_param_list = [] + options_param_list = [] set_param_list = [] class_vars = [] template_gens = [] @@ -569,6 +614,7 @@ def create_packages(): add_var( init_vars, None, + options_param_list, init_param_list, package_properties, doc_string, @@ -589,6 +635,7 @@ def create_packages(): add_var( init_vars, None, + options_param_list, init_param_list, package_properties, doc_string, @@ -610,6 +657,7 @@ def create_packages(): add_var( init_vars, None, + options_param_list, init_param_list, package_properties, doc_string, @@ -640,6 +688,7 @@ def create_packages(): tg = add_var( init_vars, class_vars, + 
options_param_list,
                 init_param_list,
                 package_properties,
                 doc_string,
@@ -871,21 +920,20 @@ def create_packages():
 
         if package[0].dfn_type == mfstructure.DfnType.model_name_file:
             # build model file
-            model_param_list = init_param_list[:-3]
-            init_vars = build_model_init_vars(model_param_list)
-
-            model_param_list.insert(0, "model_rel_path='.'")
-            model_param_list.insert(0, "exe_name='mf6'")
-            model_param_list.insert(0, "version='mf6'")
-            model_param_list.insert(0, "model_nam_file=None")
-            model_param_list.insert(0, "modelname='model'")
-            model_param_list.append("**kwargs,")
+            init_vars = build_model_init_vars(options_param_list)
+
+            options_param_list.insert(0, "model_rel_path='.'")
+            options_param_list.insert(0, "exe_name='mf6'")
+            options_param_list.insert(0, "version='mf6'")
+            options_param_list.insert(0, "model_nam_file=None")
+            options_param_list.insert(0, "modelname='model'")
+            options_param_list.append("**kwargs,")
             init_string_model = build_init_string(
-                init_string_model, model_param_list
+                init_string_model, options_param_list
             )
             model_name = clean_class_string(package[2])
             class_def_string = "class Modflow{}(mfmodel.MFModel):\n".format(
                 model_name.capitalize()
             )
             class_def_string = class_def_string.replace("-", "_")
             doc_string.add_parameter(
@@ -928,21 +976,103 @@ def create_packages():
                 doc_string.get_doc_string(True),
                 doc_text,
                 class_var_string,
                 init_string_model,
                 mparent_init_string,
                 init_vars,
                 load_txt,
             )
             md_file = open(
                 os.path.join(util_path, "..", "modflow", f"mf{model_name}.py"),
                 "w",
                 newline="\n",
             )
             md_file.write(package_string)
             md_file.close()
             init_file_imports.append(
                 f"from .mf{model_name} import Modflow{model_name.capitalize()}\n"
             )
+        elif package[0].dfn_type == mfstructure.DfnType.sim_name_file:
+            # build simulation file
+            init_vars = build_model_init_vars(options_param_list)
+
+            options_param_list.insert(0, "lazy_io=False")
+            options_param_list.insert(0, "write_headers=True")
+            options_param_list.insert(0, "verbosity_level=1")
+            options_param_list.insert(
+                0, "sim_ws: Union[str, os.PathLike] = " "os.curdir"
+            )
+            options_param_list.insert(
+                0, "exe_name: Union[str, os.PathLike] " '= "mf6"'
+            )
+            options_param_list.insert(0, 
"version='mf6'") + options_param_list.insert(0, "sim_name='sim'") + init_string_sim = " def __init__(self" + init_string_sim = build_init_string( + init_string_sim, options_param_list + ) + class_def_string = ( + "class MFSimulation(mfsimbase." "MFSimulationBase):\n" + ) + doc_string.add_parameter( + " sim_name : str\n" " Name of the simulation", + beginning_of_list=True, + model_parameter=True, ) + doc_string.description = ( + "MFSimulation is used to load, build, and/or save a MODFLOW " + "6 simulation. \n A MFSimulation object must be created " + "before creating any of the MODFLOW 6 \n model objects." + ) + sparent_init_string = " super().__init__(" + spaces = " " * len(sparent_init_string) + sparent_init_string = ( + "{}sim_name=sim_name,\n{}" + "version=version,\n{}" + "exe_name=exe_name,\n{}" + "sim_ws=sim_ws,\n{}" + "verbosity_level=verbosity_level,\n{}" + "write_headers=write_headers,\n{}" + "lazy_io=lazy_io,\n{}" + ")\n".format( + sparent_init_string, + spaces, + spaces, + spaces, + spaces, + spaces, + spaces, + spaces, + ) + ) + sim_import_string = ( + "import os\n" + "from typing import Union\n" + "from .. import mfsimbase" + ) + + load_txt, doc_text = build_sim_load() + package_string = "{}\n{}\n\n\n{}{}\n{}\n{}{}\n{}\n\n{}".format( + comment_string, + sim_import_string, + class_def_string, + doc_string.get_doc_string(False, True), + doc_text, + init_string_sim, + sparent_init_string, + init_vars, + load_txt, + ) + sim_file = open( + os.path.join(util_path, "..", "modflow", f"mfsimulation.py"), + "w", + newline="\n", + ) + sim_file.write(package_string) + sim_file.close() + init_file_imports.append( + "from .mfsimulation import MFSimulation\n" + ) + # Sort the imports for line in sorted(init_file_imports, key=lambda x: x.split()[3]): init_file.write(line) diff --git a/flopy/mf6/utils/generate_classes.py b/flopy/mf6/utils/generate_classes.py index d3872f4375..ab078e33bb 100644 --- a/flopy/mf6/utils/generate_classes.py +++ b/flopy/mf6/utils/generate_classes.py @@ -100,7 +100,7 @@ def delete_mf6_classes(): for entry in os.listdir(pth) if os.path.isfile(os.path.join(pth, entry)) ] - delete_files(files, pth, exclude="mfsimulation.py") + delete_files(files, pth) def generate_classes( From a84d88596f8b8e8f9f8fa074ad8fce626a84ebd0 Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Tue, 11 Jul 2023 11:53:38 -0700 Subject: [PATCH 006/101] fix(exchange and gnc package cellids): #1866 (#1871) * fix(exchange and gnc package cellids): #1866 * merge --------- Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- autotest/regression/test_mf6.py | 189 +++++++++++++++++- .../test006_2models_diff_dis/cell2d.txt | 121 +++++++++++ .../test006_2models_diff_dis/exg.txt | 9 + .../test006_2models_diff_dis/gnc.txt | 9 + .../test006_2models_diff_dis/vertices.txt | 148 ++++++++++++++ flopy/mf6/coordinates/modeldimensions.py | 45 +++-- flopy/mf6/data/mfdatalist.py | 21 +- flopy/mf6/data/mfdatascalar.py | 52 ++--- flopy/mf6/data/mfdatastorage.py | 34 ++-- flopy/mf6/data/mfdatautil.py | 58 +++--- flopy/mf6/data/mffileaccess.py | 12 +- flopy/mf6/mfsimbase.py | 1 + flopy/mf6/utils/testutils.py | 31 ++- flopy/utils/datautil.py | 20 ++ 14 files changed, 640 insertions(+), 110 deletions(-) create mode 100644 examples/data/mf6/create_tests/test006_2models_diff_dis/cell2d.txt create mode 100644 examples/data/mf6/create_tests/test006_2models_diff_dis/exg.txt create mode 100644 examples/data/mf6/create_tests/test006_2models_diff_dis/gnc.txt create mode 100644 
examples/data/mf6/create_tests/test006_2models_diff_dis/vertices.txt diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index a8be56ac57..3ffc52f9a0 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -918,10 +918,7 @@ def test021_twri(function_tmpdir, example_data_path): version="mf6", exe_name="mf6", sim_ws=data_folder, - memory_print_option="SUMMARY", ) - sim.nocheck = True - sim.set_sim_path(function_tmpdir) tdis_rc = [(86400.0, 1, 1.0)] tdis_package = ModflowTdis( @@ -930,8 +927,6 @@ def test021_twri(function_tmpdir, example_data_path): model = ModflowGwf( sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) - model.print_input = True - ims_package = ModflowIms( sim, print_option="SUMMARY", @@ -1103,9 +1098,6 @@ def test021_twri(function_tmpdir, example_data_path): strt2 = ic2.strt.get_data() drn2 = model2.get_package("drn") drn_spd = drn2.stress_period_data.get_data() - assert sim2.memory_print_option.get_data().lower() == "summary" - assert sim2.nocheck.get_data() is True - assert model2.print_input.get_data() is True assert strt2[0, 0, 0] == 0.0 assert strt2[1, 0, 0] == 1.0 assert strt2[2, 0, 0] == 2.0 @@ -3608,6 +3600,187 @@ def test005_advgw_tidal(function_tmpdir, example_data_path): ) +@requires_exe("mf6") +@pytest.mark.regression +def test006_2models_different_dis(function_tmpdir, example_data_path): + # init paths + test_ex_name = "test006_2models_diff_dis" + model_name_1 = "model1" + model_name_2 = "model2" + pth = example_data_path / "mf6" / "create_tests" / test_ex_name + + expected_output_folder = os.path.join(pth, "expected_output") + expected_head_file_1 = os.path.join(expected_output_folder, "model1.hds") + expected_head_file_2 = os.path.join(expected_output_folder, "model2.hds") + + # create simulation + sim = MFSimulation( + sim_name=test_ex_name, version="mf6", exe_name="mf6", sim_ws=pth + ) + tdis_rc = [(1.0, 1, 1.0)] + tdis_package = ModflowTdis( + sim, time_units="DAYS", nper=1, perioddata=tdis_rc + ) + model_1 = ModflowGwf( + sim, + modelname=model_name_1, + model_nam_file=f"{model_name_1}.nam", + ) + model_2 = ModflowGwf( + sim, + modelname=model_name_2, + model_nam_file=f"{model_name_2}.nam", + ) + ims_package = ModflowIms( + sim, + print_option="SUMMARY", + outer_dvclose=0.00000001, + outer_maximum=1000, + under_relaxation="NONE", + inner_maximum=1000, + inner_dvclose=0.00000001, + rcloserecord=0.01, + linear_acceleration="BICGSTAB", + scaling_method="NONE", + reordering_method="NONE", + relaxation_factor=0.97, + ) + sim.register_ims_package(ims_package, [model_1.name, model_2.name]) + dis_package = ModflowGwfdis( + model_1, + length_units="METERS", + nlay=1, + nrow=7, + ncol=7, + idomain=1, + delr=100.0, + delc=100.0, + top=0.0, + botm=-100.0, + filename=f"{model_name_1}.dis", + ) + + vertices = testutils.read_vertices(os.path.join(pth, "vertices.txt")) + c2drecarray = testutils.read_cell2d(os.path.join(pth, "cell2d.txt")) + disv_package = ModflowGwfdisv( + model_2, + ncpl=121, + nlay=1, + nvert=148, + top=0.0, + botm=-40.0, + idomain=1, + vertices=vertices, + cell2d=c2drecarray, + filename=f"{model_name_2}.disv", + ) + ic_package_1 = ModflowGwfic( + model_1, strt=1.0, filename=f"{model_name_1}.ic" + ) + ic_package_2 = ModflowGwfic( + model_2, strt=1.0, filename=f"{model_name_2}.ic" + ) + npf_package_1 = ModflowGwfnpf( + model_1, save_flows=True, perched=True, icelltype=0, k=1.0, k33=1.0 + ) + npf_package_2 = ModflowGwfnpf( + model_2, save_flows=True, perched=True, icelltype=0, k=1.0, 
k33=1.0 + ) + oc_package_1 = ModflowGwfoc( + model_1, + budget_filerecord="model1.cbc", + head_filerecord="model1.hds", + saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], + printrecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], + ) + oc_package_2 = ModflowGwfoc( + model_2, + budget_filerecord="model2.cbc", + head_filerecord="model2.hds", + saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], + printrecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], + ) + + # build periodrecarray for chd package + set_1 = [0, 7, 14, 18, 22, 26, 33] + set_2 = [6, 13, 17, 21, 25, 32, 39] + stress_period_data = [] + for value in range(0, 7): + stress_period_data.append(((0, value, 0), 1.0)) + for value in range(0, 7): + stress_period_data.append(((0, value, 6), 0.0)) + chd_package = ModflowGwfchd( + model_1, + print_input=True, + print_flows=True, + save_flows=True, + maxbound=30, + stress_period_data=stress_period_data, + ) + exgrecarray = testutils.read_exchangedata( + os.path.join(pth, "exg.txt"), 3, 2 + ) + + # build obs dictionary + gwf_obs = { + ("gwfgwf_obs.csv"): [ + ("gwf-1-3-2_1-1-1", "flow-ja-face", (0, 2, 1), (0, 0, 0)), + ("gwf-1-3-2_1-2-1", "flow-ja-face", (0, 2, 1), (0, 1, 0)), + ] + } + + exg_package = ModflowGwfgwf( + sim, + print_input=True, + print_flows=True, + save_flows=True, + auxiliary="testaux", + nexg=9, + exchangedata=exgrecarray, + exgtype="gwf6-gwf6", + exgmnamea=model_name_1, + exgmnameb=model_name_2, + observations=gwf_obs, + ) + + gnc_path = os.path.join("gnc", "test006_2models_gnc.gnc") + gncrecarray = testutils.read_gncrecarray( + os.path.join(pth, "gnc.txt"), 3, 2 + ) + gnc_package = exg_package.gnc.initialize( + filename=gnc_path, + print_input=True, + print_flows=True, + numgnc=9, + numalphaj=1, + gncdata=gncrecarray, + ) + + # change folder to save simulation + sim.set_sim_path(function_tmpdir) + + # write simulation to new location + sim.write_simulation() + # run simulation + success, buff = sim.run_simulation() + assert success + + sim2 = MFSimulation.load(sim_ws=sim.sim_path) + exh = sim2.get_package("gwfgwf") + exh_data = exh.exchangedata.get_data() + assert exh_data[0][0] == (0, 2, 1) + assert exh_data[0][1] == (0, 0) + assert exh_data[3][0] == (0, 3, 1) + assert exh_data[3][1] == (0, 3) + gnc = sim2.get_package("gnc") + gnc_data = gnc.gncdata.get_data() + assert gnc_data[0][0] == (0, 2, 1) + assert gnc_data[0][1] == (0, 0) + assert gnc_data[0][2] == (0, 1, 1) + + sim.delete_output_files() + + @requires_exe("mf6") @pytest.mark.regression def test006_gwf3(function_tmpdir, example_data_path): diff --git a/examples/data/mf6/create_tests/test006_2models_diff_dis/cell2d.txt b/examples/data/mf6/create_tests/test006_2models_diff_dis/cell2d.txt new file mode 100644 index 0000000000..7e65843dfe --- /dev/null +++ b/examples/data/mf6/create_tests/test006_2models_diff_dis/cell2d.txt @@ -0,0 +1,121 @@ +1 50.0 650.0 5 1 2 10 9 1 +2 150.0 650.0 5 2 3 11 10 2 +3 250.0 650.0 5 3 4 12 11 3 +4 350.0 650.0 5 4 5 13 12 4 +5 450.0 650.0 5 5 6 14 13 5 +6 550.0 650.0 5 6 7 15 14 6 +7 650.0 650.0 5 7 8 16 15 7 +8 50.0 550.0 5 9 10 18 17 9 +9 150.0 550.0 5 10 11 19 18 10 +10 250.0 550.0 7 11 12 22 21 20 19 11 +11 350.0 550.0 7 12 13 25 24 23 22 12 +12 450.0 550.0 7 13 14 28 27 26 25 13 +13 550.0 550.0 5 14 15 29 28 14 +14 650.0 550.0 5 15 16 30 29 15 +15 50.0 450.0 5 17 18 52 51 17 +16 150.0 450.0 7 18 19 31 41 53 52 18 +17 550.0 450.0 7 28 29 63 62 50 40 28 +18 650.0 450.0 5 29 30 64 63 29 +19 50.0 350.0 5 51 52 86 85 51 +20 150.0 350.0 7 52 53 65 75 87 86 52 +21 550.0 350.0 7 62 63 97 96 84 74 62 +22 
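The crux of this regression test is that exchange and GNC cellids now resolve against each model's own grid, so a three-component DIS cellid can sit beside a two-component DISV cellid in the same record. A hedged illustration built from the first row of the `exg.txt` data file in this commit:

```python
# Illustration only (values from exg.txt, shown as the 0-based tuples the
# test asserts): one exchangedata record pairing grids of different
# dimensionality.
record = ((0, 2, 1), (0, 0), 1, 50.0, 16.67, 33.33, 100.99)
cellidm1, cellidm2 = record[0], record[1]
assert len(cellidm1) == 3  # (layer, row, col) on model1's DIS grid
assert len(cellidm2) == 2  # (layer, cell2d) on model2's DISV grid
```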
650.0 350.0 5 63 64 98 97 63 +23 50.0 250.0 5 85 86 120 119 85 +24 150.0 250.0 7 86 87 99 109 121 120 86 +25 550.0 250.0 7 96 97 131 130 118 108 96 +26 650.0 250.0 5 97 98 132 131 97 +27 50.0 150.0 5 119 120 134 133 119 +28 150.0 150.0 5 120 121 135 134 120 +29 250.0 150.0 7 121 122 123 124 136 135 121 +30 350.0 150.0 7 124 125 126 127 137 136 124 +31 450.0 150.0 7 127 128 129 130 138 137 127 +32 550.0 150.0 5 130 131 139 138 130 +33 650.0 150.0 5 131 132 140 139 131 +34 50.0 50.0 5 133 134 142 141 133 +35 150.0 50.0 5 134 135 143 142 134 +36 250.0 50.0 5 135 136 144 143 135 +37 350.0 50.0 5 136 137 145 144 136 +38 450.0 50.0 5 137 138 146 145 137 +39 550.0 50.0 5 138 139 147 146 138 +40 650.0 50.0 5 139 140 148 147 139 +41 216.666666667 483.333333333 5 19 20 32 31 19 +42 250.0 483.333333333 5 20 21 33 32 20 +43 283.333333333 483.333333333 5 21 22 34 33 21 +44 316.666666667 483.333333333 5 22 23 35 34 22 +45 350.0 483.333333333 5 23 24 36 35 23 +46 383.333333333 483.333333333 5 24 25 37 36 24 +47 416.666666667 483.333333333 5 25 26 38 37 25 +48 450.0 483.333333333 5 26 27 39 38 26 +49 483.333333333 483.333333333 5 27 28 40 39 27 +50 216.666666667 450.0 5 31 32 42 41 31 +51 250.0 450.0 5 32 33 43 42 32 +52 283.333333333 450.0 5 33 34 44 43 33 +53 316.666666667 450.0 5 34 35 45 44 34 +54 350.0 450.0 5 35 36 46 45 35 +55 383.333333333 450.0 5 36 37 47 46 36 +56 416.666666667 450.0 5 37 38 48 47 37 +57 450.0 450.0 5 38 39 49 48 38 +58 483.333333333 450.0 5 39 40 50 49 39 +59 216.666666667 416.666666667 5 41 42 54 53 41 +60 250.0 416.666666667 5 42 43 55 54 42 +61 283.333333333 416.666666667 5 43 44 56 55 43 +62 316.666666667 416.666666667 5 44 45 57 56 44 +63 350.0 416.666666667 5 45 46 58 57 45 +64 383.333333333 416.666666667 5 46 47 59 58 46 +65 416.666666667 416.666666667 5 47 48 60 59 47 +66 450.0 416.666666667 5 48 49 61 60 48 +67 483.333333333 416.666666667 5 49 50 62 61 49 +68 216.666666667 383.333333333 5 53 54 66 65 53 +69 250.0 383.333333333 5 54 55 67 66 54 +70 283.333333333 383.333333333 5 55 56 68 67 55 +71 316.666666667 383.333333333 5 56 57 69 68 56 +72 350.0 383.333333333 5 57 58 70 69 57 +73 383.333333333 383.333333333 5 58 59 71 70 58 +74 416.666666667 383.333333333 5 59 60 72 71 59 +75 450.0 383.333333333 5 60 61 73 72 60 +76 483.333333333 383.333333333 5 61 62 74 73 61 +77 216.666666667 350.0 5 65 66 76 75 65 +78 250.0 350.0 5 66 67 77 76 66 +79 283.333333333 350.0 5 67 68 78 77 67 +80 316.666666667 350.0 5 68 69 79 78 68 +81 350.0 350.0 5 69 70 80 79 69 +82 383.333333333 350.0 5 70 71 81 80 70 +83 416.666666667 350.0 5 71 72 82 81 71 +84 450.0 350.0 5 72 73 83 82 72 +85 483.333333333 350.0 5 73 74 84 83 73 +86 216.666666667 316.666666667 5 75 76 88 87 75 +87 250.0 316.666666667 5 76 77 89 88 76 +88 283.333333333 316.666666667 5 77 78 90 89 77 +89 316.666666667 316.666666667 5 78 79 91 90 78 +90 350.0 316.666666667 5 79 80 92 91 79 +91 383.333333333 316.666666667 5 80 81 93 92 80 +92 416.666666667 316.666666667 5 81 82 94 93 81 +93 450.0 316.666666667 5 82 83 95 94 82 +94 483.333333333 316.666666667 5 83 84 96 95 83 +95 216.666666667 283.333333333 5 87 88 100 99 87 +96 250.0 283.333333333 5 88 89 101 100 88 +97 283.333333333 283.333333333 5 89 90 102 101 89 +98 316.666666667 283.333333333 5 90 91 103 102 90 +99 350.0 283.333333333 5 91 92 104 103 91 +100 383.333333333 283.333333333 5 92 93 105 104 92 +101 416.666666667 283.333333333 5 93 94 106 105 93 +102 450.0 283.333333333 5 94 95 107 106 94 +103 483.333333333 283.333333333 5 95 96 108 107 95 +104 216.666666667 250.0 5 
99 100 110 109 99 +105 250.0 250.0 5 100 101 111 110 100 +106 283.333333333 250.0 5 101 102 112 111 101 +107 316.666666667 250.0 5 102 103 113 112 102 +108 350.0 250.0 5 103 104 114 113 103 +109 383.333333333 250.0 5 104 105 115 114 104 +110 416.666666667 250.0 5 105 106 116 115 105 +111 450.0 250.0 5 106 107 117 116 106 +112 483.333333333 250.0 5 107 108 118 117 107 +113 216.666666667 216.666666667 5 109 110 122 121 109 +114 250.0 216.666666667 5 110 111 123 122 110 +115 283.333333333 216.666666667 5 111 112 124 123 111 +116 316.666666667 216.666666667 5 112 113 125 124 112 +117 350.0 216.666666667 5 113 114 126 125 113 +118 383.333333333 216.666666667 5 114 115 127 126 114 +119 416.666666667 216.666666667 5 115 116 128 127 115 +120 450.0 216.666666667 5 116 117 129 128 116 +121 483.333333333 216.666666667 5 117 118 130 129 117 \ No newline at end of file diff --git a/examples/data/mf6/create_tests/test006_2models_diff_dis/exg.txt b/examples/data/mf6/create_tests/test006_2models_diff_dis/exg.txt new file mode 100644 index 0000000000..29122b2399 --- /dev/null +++ b/examples/data/mf6/create_tests/test006_2models_diff_dis/exg.txt @@ -0,0 +1,9 @@ + 1 3 2 1 1 1 50. 16.67 33.33 100.99 + 1 3 2 1 2 1 50. 16.67 33.33 100.99 + 1 3 2 1 3 1 50. 16.67 33.33 100.99 + 1 4 2 1 4 1 50. 16.67 33.33 100.99 + 1 4 2 1 5 1 50. 16.67 33.33 100.99 + 1 4 2 1 6 1 50. 16.67 33.33 100.99 + 1 5 2 1 7 1 50. 16.67 33.33 100.99 + 1 5 2 1 8 1 50. 16.67 33.33 100.99 + 1 5 2 1 9 1 50. 16.67 33.33 100.99 diff --git a/examples/data/mf6/create_tests/test006_2models_diff_dis/gnc.txt b/examples/data/mf6/create_tests/test006_2models_diff_dis/gnc.txt new file mode 100644 index 0000000000..1e139d0ac0 --- /dev/null +++ b/examples/data/mf6/create_tests/test006_2models_diff_dis/gnc.txt @@ -0,0 +1,9 @@ + 1 3 2 1 1 1 2 2 0.333333333333 + 1 3 2 1 2 0 0 0 0.333333333333 + 1 3 2 1 3 1 4 2 0.333333333333 + 1 4 2 1 4 1 3 2 0.333333333333 + 1 4 2 1 5 0 0 0 0.333333333333 + 1 4 2 1 6 1 5 2 0.333333333333 + 1 5 2 1 7 1 4 2 0.333333333333 + 1 5 2 1 8 0 0 0 0.333333333333 + 1 5 2 1 9 1 6 2 0.333333333333 diff --git a/examples/data/mf6/create_tests/test006_2models_diff_dis/vertices.txt b/examples/data/mf6/create_tests/test006_2models_diff_dis/vertices.txt new file mode 100644 index 0000000000..7afb23a513 --- /dev/null +++ b/examples/data/mf6/create_tests/test006_2models_diff_dis/vertices.txt @@ -0,0 +1,148 @@ +1 0.0 700.0 +2 100.0 700.0 +3 200.0 700.0 +4 300.0 700.0 +5 400.0 700.0 +6 500.0 700.0 +7 600.0 700.0 +8 700.0 700.0 +9 0.0 600.0 +10 100.0 600.0 +11 200.0 600.0 +12 300.0 600.0 +13 400.0 600.0 +14 500.0 600.0 +15 600.0 600.0 +16 700.0 600.0 +17 0.0 500.0 +18 100.0 500.0 +19 200.0 500.0 +20 233.333333333 500.0 +21 266.666666667 500.0 +22 300.0 500.0 +23 333.333333333 500.0 +24 366.666666667 500.0 +25 400.0 500.0 +26 433.333333333 500.0 +27 466.666666667 500.0 +28 500.0 500.0 +29 600.0 500.0 +30 700.0 500.0 +31 200.0 466.666666667 +32 233.333333333 466.666666667 +33 266.666666667 466.666666667 +34 300.0 466.666666667 +35 333.333333333 466.666666667 +36 366.666666667 466.666666667 +37 400.0 466.666666667 +38 433.333333333 466.666666667 +39 466.666666667 466.666666667 +40 500.0 466.666666667 +41 200.0 433.333333333 +42 233.333333333 433.333333333 +43 266.666666667 433.333333333 +44 300.0 433.333333333 +45 333.333333333 433.333333333 +46 366.666666667 433.333333333 +47 400.0 433.333333333 +48 433.333333333 433.333333333 +49 466.666666667 433.333333333 +50 500.0 433.333333333 +51 0.0 400.0 +52 100.0 400.0 +53 200.0 400.0 +54 233.333333333 400.0 
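Each `gnc.txt` row above packs two cellids on model 1 (`cellidn`, `cellidsj`) around one on model 2 (`cellidm`), plus the `alphaj` weight. A hedged reading of the first row, converted to the 0-based tuples the test asserts:

```python
# Illustration only: split one gnc.txt row the way
# testutils.read_gncrecarray(..., cellid_size=3, cellid_size_2=2) does.
row = "1 3 2 1 1 1 2 2 0.333333333333".split()
cellidn = tuple(int(v) - 1 for v in row[0:3])   # (0, 2, 1) on model 1
cellidm = tuple(int(v) - 1 for v in row[3:5])   # (0, 0) on model 2
cellidsj = tuple(int(v) - 1 for v in row[5:8])  # (0, 1, 1) on model 1
alphaj = float(row[8])                          # contributing-cell weight
assert (cellidn, cellidm, cellidsj) == ((0, 2, 1), (0, 0), (0, 1, 1))
```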
+55 266.666666667 400.0 +56 300.0 400.0 +57 333.333333333 400.0 +58 366.666666667 400.0 +59 400.0 400.0 +60 433.333333333 400.0 +61 466.666666667 400.0 +62 500.0 400.0 +63 600.0 400.0 +64 700.0 400.0 +65 200.0 366.666666667 +66 233.333333333 366.666666667 +67 266.666666667 366.666666667 +68 300.0 366.666666667 +69 333.333333333 366.666666667 +70 366.666666667 366.666666667 +71 400.0 366.666666667 +72 433.333333333 366.666666667 +73 466.666666667 366.666666667 +74 500.0 366.666666667 +75 200.0 333.333333333 +76 233.333333333 333.333333333 +77 266.666666667 333.333333333 +78 300.0 333.333333333 +79 333.333333333 333.333333333 +80 366.666666667 333.333333333 +81 400.0 333.333333333 +82 433.333333333 333.333333333 +83 466.666666667 333.333333333 +84 500.0 333.333333333 +85 0.0 300.0 +86 100.0 300.0 +87 200.0 300.0 +88 233.333333333 300.0 +89 266.666666667 300.0 +90 300.0 300.0 +91 333.333333333 300.0 +92 366.666666667 300.0 +93 400.0 300.0 +94 433.333333333 300.0 +95 466.666666667 300.0 +96 500.0 300.0 +97 600.0 300.0 +98 700.0 300.0 +99 200.0 266.666666667 +100 233.333333333 266.666666667 +101 266.666666667 266.666666667 +102 300.0 266.666666667 +103 333.333333333 266.666666667 +104 366.666666667 266.666666667 +105 400.0 266.666666667 +106 433.333333333 266.666666667 +107 466.666666667 266.666666667 +108 500.0 266.666666667 +109 200.0 233.333333333 +110 233.333333333 233.333333333 +111 266.666666667 233.333333333 +112 300.0 233.333333333 +113 333.333333333 233.333333333 +114 366.666666667 233.333333333 +115 400.0 233.333333333 +116 433.333333333 233.333333333 +117 466.666666667 233.333333333 +118 500.0 233.333333333 +119 0.0 200.0 +120 100.0 200.0 +121 200.0 200.0 +122 233.333333333 200.0 +123 266.666666667 200.0 +124 300.0 200.0 +125 333.333333333 200.0 +126 366.666666667 200.0 +127 400.0 200.0 +128 433.333333333 200.0 +129 466.666666667 200.0 +130 500.0 200.0 +131 600.0 200.0 +132 700.0 200.0 +133 0.0 100.0 +134 100.0 100.0 +135 200.0 100.0 +136 300.0 100.0 +137 400.0 100.0 +138 500.0 100.0 +139 600.0 100.0 +140 700.0 100.0 +141 0.0 0.0 +142 100.0 0.0 +143 200.0 0.0 +144 300.0 0.0 +145 400.0 0.0 +146 500.0 0.0 +147 600.0 0.0 +148 700.0 0.0 \ No newline at end of file diff --git a/flopy/mf6/coordinates/modeldimensions.py b/flopy/mf6/coordinates/modeldimensions.py index fadb11a127..c41003592b 100644 --- a/flopy/mf6/coordinates/modeldimensions.py +++ b/flopy/mf6/coordinates/modeldimensions.py @@ -66,15 +66,17 @@ def unlock(self): self.locked = False self.package_dim.unlock() - def get_model_grid(self, data_item_num=None): + def get_model_grid(self, data_item_num=None, model_num=None): if self.locked: - if self.model_grid is None: + if self.model_grid is None or model_num is not None: self.model_grid = self.get_model_dim( - data_item_num + data_item_num, model_num ).get_model_grid() return self.model_grid else: - return self.get_model_dim(data_item_num).get_model_grid() + return self.get_model_dim( + data_item_num, model_num + ).get_model_grid() def get_data_shape( self, @@ -100,24 +102,37 @@ def model_subspace_size(self, subspace_string="", data_item_num=None): subspace_string ) - def get_model_dim(self, data_item_num): + def get_model_dim(self, data_item_num, model_num=None): if ( self.package_dim.model_dim is None - or data_item_num is None + or (data_item_num is None and model_num is None) or len(self.package_dim.model_dim) == 1 ): return self.package_dim.model_dim[0] else: - if not (len(self.structure.data_item_structures) > data_item_num): - raise FlopyException( - 'Data item index "{}" 
requested which ' - "is greater than the maximum index of" - "{}.".format( - data_item_num, - len(self.structure.data_item_structures) - 1, + if model_num is None: + model_num = self.structure.data_item_structures[data_item_num][ + -1 + ] + if not ( + len(self.structure.data_item_structures) > data_item_num + ): + raise FlopyException( + 'Data item index "{}" requested which ' + "is greater than the maximum index of" + "{}.".format( + data_item_num, + len(self.structure.data_item_structures) - 1, + ) ) - ) - model_num = self.structure.data_item_structures[data_item_num][-1] + else: + if not len(self.package_dim.model_dim) > model_num: + raise FlopyException( + f'Model item index "{model_num}" requested which ' + "is greater than the maximum index of" + f"{len(self.package_dim.model_dim)}." + ) + if DatumUtil.is_int(model_num): return self.package_dim.model_dim[int(model_num)] diff --git a/flopy/mf6/data/mfdatalist.py b/flopy/mf6/data/mfdatalist.py index a62916c32f..7cffa30bc0 100644 --- a/flopy/mf6/data/mfdatalist.py +++ b/flopy/mf6/data/mfdatalist.py @@ -7,7 +7,7 @@ from ...datbase import DataListInterface, DataType from ...mbase import ModelInterface -from ...utils import datautil +from ...utils.datautil import DatumUtil from ..data import mfdata, mfstructure from ..mfbase import ExtFileAction, MFDataException, VerbosityLevel from ..utils.mfenums import DiscretizationType @@ -1043,14 +1043,21 @@ def _get_file_entry_record( data_val = data_line[index] if data_item.is_cellid or ( data_item.possible_cellid - and storage._validate_cellid([data_val], 0) + and storage._validate_cellid( + [data_val], 0, data_item + ) ): if ( data_item.shape is not None and len(data_item.shape) > 0 and data_item.shape[0] == "ncelldim" ): - model_grid = data_dim.get_model_grid() + model_num = DatumUtil.cellid_model_num( + data_item, + self.structure.model_data, + self._data_dimensions.package_dim.model_dim, + ) + model_grid = data_dim.get_model_grid(model_num) cellid_size = ( model_grid.get_num_spatial_coordinates() ) @@ -1058,9 +1065,7 @@ def _get_file_entry_record( resolved_shape, cellid_size ) data_size = 1 - if len( - resolved_shape - ) == 1 and datautil.DatumUtil.is_int( + if len(resolved_shape) == 1 and DatumUtil.is_int( resolved_shape[0] ): data_size = int(resolved_shape[0]) @@ -1722,7 +1727,7 @@ def store_as_external_file( or replace_existing_external ): fname, ext = os.path.splitext(external_file_path) - if datautil.DatumUtil.is_int(sp): + if DatumUtil.is_int(sp): full_name = f"{fname}_{sp + 1}{ext}" else: full_name = f"{fname}_{sp}{ext}" @@ -1944,8 +1949,6 @@ def _set_data_record( self._simulation_data.debug, ) if key is None: - if data is None: - return # search for a key new_key_index = self.structure.first_non_keyword_index() if new_key_index is not None and len(data) > new_key_index: diff --git a/flopy/mf6/data/mfdatascalar.py b/flopy/mf6/data/mfdatascalar.py index 7c8421167d..622754f560 100644 --- a/flopy/mf6/data/mfdatascalar.py +++ b/flopy/mf6/data/mfdatascalar.py @@ -185,30 +185,34 @@ def set_data(self, data): ) and len(data) > 1: self._add_data_line_comment(data[1:], 0) storage = self._get_storage_obj() - data_struct = self.structure.data_item_structures[0] - try: - converted_data = convert_data( - data, self._data_dimensions, self._data_type, data_struct - ) - except Exception as ex: - type_, value_, traceback_ = sys.exc_info() - comment = ( - f'Could not convert data "{data}" to type "{self._data_type}".' 
- ) - raise MFDataException( - self.structure.get_model(), - self.structure.get_package(), - self._path, - "converting data", - self.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - comment, - self._simulation_data.debug, - ex, - ) + if data is None: + converted_data = data + else: + data_struct = self.structure.data_item_structures[0] + try: + converted_data = convert_data( + data, self._data_dimensions, self._data_type, data_struct + ) + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + comment = ( + f'Could not convert data "{data}" to type ' + f'"{self._data_type}".' + ) + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "converting data", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + comment, + self._simulation_data.debug, + ex, + ) try: storage.set_data(converted_data, key=self._current_key) except Exception as ex: diff --git a/flopy/mf6/data/mfdatastorage.py b/flopy/mf6/data/mfdatastorage.py index b235e38ad0..491c2e20cf 100644 --- a/flopy/mf6/data/mfdatastorage.py +++ b/flopy/mf6/data/mfdatastorage.py @@ -1569,25 +1569,35 @@ def _build_recarray(self, data, key, autofill): self._verify_list(new_data) return new_data + def _get_cellid_size(self, data_item_name): + model_num = DatumUtil.cellid_model_num( + data_item_name, + self.data_dimensions.structure.model_data, + self.data_dimensions.package_dim.model_dim, + ) + model_grid = self.data_dimensions.get_model_grid(model_num=model_num) + return model_grid.get_num_spatial_coordinates() + def make_tuple_cellids(self, data): # convert cellids from individual layer, row, column fields into # tuples (layer, row, column) - data_dim = self.data_dimensions - model_grid = data_dim.get_model_grid() - cellid_size = model_grid.get_num_spatial_coordinates() - new_data = [] current_cellid = () for line in data: + data_idx = 0 new_line = [] for item, is_cellid in zip(line, self.recarray_cellid_list_ex): if is_cellid: + cellid_size = self._get_cellid_size( + self._recarray_type_list[data_idx][0], + ) current_cellid += (item,) if len(current_cellid) == cellid_size: new_line.append(current_cellid) current_cellid = () else: new_line.append(item) + data_idx += 1 new_data.append(tuple(new_line)) return new_data @@ -2092,14 +2102,14 @@ def resolve_shape_list( return False, True return False, False - def _validate_cellid(self, arr_line, data_index): + def _validate_cellid(self, arr_line, data_index, data_item): if not self.data_dimensions.structure.model_data: # not model data so this is not a cell id return False if arr_line is None: return False + cellid_size = self._get_cellid_size(data_item.name) model_grid = self.data_dimensions.get_model_grid() - cellid_size = model_grid.get_num_spatial_coordinates() if cellid_size + data_index > len(arr_line): return False for index, dim_size in zip( @@ -2353,9 +2363,8 @@ def _verify_list(self, data): # this is a cell id. verify that it contains the # correct number of integers if cellid_size is None: - model_grid = datadim.get_model_grid() - cellid_size = ( - model_grid.get_num_spatial_coordinates() + cellid_size = self._get_cellid_size( + self._recarray_type_list[index][0] ) if ( cellid_size != 1 @@ -2934,9 +2943,7 @@ def build_type_list( ): # A cellid is a single entry (tuple) in the # recarray. Adjust dimensions accordingly. 
- data_dim = self.data_dimensions - grid = data_dim.get_model_grid() - size = grid.get_num_spatial_coordinates() + size = self._get_cellid_size(data_item.name) data_item.remove_cellid(resolved_shape, size) if not data_item.optional or not min_size: for index in range(0, resolved_shape[0]): @@ -2973,8 +2980,7 @@ def _append_type_lists(self, name, data_type, iscellid): if iscellid and self._model_or_sim.model_type is not None: # write each part of the cellid out as a separate entry # to _recarray_list_list_ex - model_grid = self.data_dimensions.get_model_grid() - cellid_size = model_grid.get_num_spatial_coordinates() + cellid_size = self._get_cellid_size(name) # determine header for different grid types if cellid_size == 1: self._do_ex_list_append(name, int, iscellid) diff --git a/flopy/mf6/data/mfdatautil.py b/flopy/mf6/data/mfdatautil.py index 390903ebeb..00b4203ef8 100644 --- a/flopy/mf6/data/mfdatautil.py +++ b/flopy/mf6/data/mfdatautil.py @@ -103,34 +103,33 @@ def convert_data(data, data_dimensions, data_type, data_item=None, sub_amt=1): False, ) elif data_type == DatumType.integer: - if data is not None: - if data_item is not None and data_item.numeric_index: - return int(PyListUtil.clean_numeric(data)) - sub_amt + if data_item is not None and data_item.numeric_index: + return int(PyListUtil.clean_numeric(data)) - sub_amt + try: + return int(data) + except (ValueError, TypeError): try: - return int(data) + return int(PyListUtil.clean_numeric(data)) except (ValueError, TypeError): - try: - return int(PyListUtil.clean_numeric(data)) - except (ValueError, TypeError): - message = ( - 'Data "{}" with value "{}" can not be ' - "converted to int" - ".".format(data_dimensions.structure.name, data) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - data_dimensions.structure.get_model(), - data_dimensions.structure.get_package(), - data_dimensions.structure.path, - "converting data", - data_dimensions.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - False, - ) + message = ( + 'Data "{}" with value "{}" can not be ' + "converted to int" + ".".format(data_dimensions.structure.name, data) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + data_dimensions.structure.get_model(), + data_dimensions.structure.get_package(), + data_dimensions.structure.path, + "converting data", + data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + False, + ) elif data_type == DatumType.string and data is not None: if data_item is None or not data_item.preserve_case: # keep strings lower case @@ -187,7 +186,12 @@ def to_string( and is_cellid and data_dim.get_model_dim(None).model_name is not None ): - model_grid = data_dim.get_model_grid() + model_num = DatumUtil.cellid_model_num( + data_item.name, + data_dim.structure.model_data, + data_dim.package_dim.model_dim, + ) + model_grid = data_dim.get_model_grid(model_num=model_num) cellid_size = model_grid.get_num_spatial_coordinates() if len(val) != cellid_size: message = ( diff --git a/flopy/mf6/data/mffileaccess.py b/flopy/mf6/data/mffileaccess.py index 5a2238aca0..94efd59535 100644 --- a/flopy/mf6/data/mffileaccess.py +++ b/flopy/mf6/data/mffileaccess.py @@ -2071,7 +2071,7 @@ def _append_data_list( self._last_line_info.append([]) if data_item.is_cellid or ( data_item.possible_cellid - and storage._validate_cellid(arr_line, data_index) + and storage._validate_cellid(arr_line, data_index, data_item) ): if self._data_dimensions is None: 
comment = ( @@ -2096,8 +2096,16 @@ def _append_data_list( comment, self._simulation_data.debug, ) + # in case of multiple model grids, determine which one to use + model_num = DatumUtil.cellid_model_num( + data_item.name, + struct.model_data, + self._data_dimensions.package_dim.model_dim, + ) + model_grid = self._data_dimensions.get_model_grid( + model_num=model_num + ) # read in the entire cellid - model_grid = self._data_dimensions.get_model_grid() cellid_size = model_grid.get_num_spatial_coordinates() cellid_tuple = () if ( diff --git a/flopy/mf6/mfsimbase.py b/flopy/mf6/mfsimbase.py index dc2da65c81..cb9b2d7e5f 100644 --- a/flopy/mf6/mfsimbase.py +++ b/flopy/mf6/mfsimbase.py @@ -906,6 +906,7 @@ def load( f" loading exchange package {exchange_file._get_pname()}..." ) exchange_file.load(strict) + instance._add_package(exchange_file, exchange_file.path) instance._exchange_files[exgfile[1]] = exchange_file # load simulation packages diff --git a/flopy/mf6/utils/testutils.py b/flopy/mf6/utils/testutils.py index 9e1e8676cb..eeb66bf17d 100644 --- a/flopy/mf6/utils/testutils.py +++ b/flopy/mf6/utils/testutils.py @@ -34,26 +34,27 @@ def read_cell2d(cell2d_file): return c2drecarray -def read_exchangedata(gwf_file, cellid_size=3): +def read_exchangedata(gwf_file, cellid_size=3, cellid_size_2=3): exgrecarray = [] fd = open(gwf_file, "r") for line in fd: linesp = line.strip().split() + cellid_tot = cellid_size + cellid_size_2 exgrecarray.append( ( make_int_tuple(linesp[0:cellid_size]), - make_int_tuple(linesp[cellid_size : cellid_size * 2]), - int(linesp[cellid_size * 2]), - float(linesp[cellid_size * 2 + 1]), - float(linesp[cellid_size * 2 + 2]), - float(linesp[cellid_size * 2 + 3]), - float(linesp[cellid_size * 2 + 4]), + make_int_tuple(linesp[cellid_size:cellid_tot]), + int(linesp[cellid_tot]), + float(linesp[cellid_tot + 1]), + float(linesp[cellid_tot + 2]), + float(linesp[cellid_tot + 3]), + float(linesp[cellid_tot + 4]), ) ) return exgrecarray -def read_gncrecarray(gnc_file, cellid_size=3): +def read_gncrecarray(gnc_file, cellid_size=3, cellid_size_2=3): gncrecarray = [] fd = open(gnc_file, "r") for line in fd: @@ -61,9 +62,17 @@ def read_gncrecarray(gnc_file, cellid_size=3): gncrecarray.append( ( make_int_tuple(linesp[0:cellid_size]), - make_int_tuple(linesp[cellid_size : cellid_size * 2]), - make_int_tuple(linesp[cellid_size * 2 : cellid_size * 3]), - float(linesp[cellid_size * 3]), + make_int_tuple( + linesp[cellid_size : cellid_size + cellid_size_2] + ), + make_int_tuple( + linesp[ + cellid_size + + cellid_size_2 : cellid_size * 2 + + cellid_size_2 + ] + ), + float(linesp[cellid_size * 2 + cellid_size_2]), ) ) return gncrecarray diff --git a/flopy/utils/datautil.py b/flopy/utils/datautil.py index f0df79f1a6..53844ae46b 100644 --- a/flopy/utils/datautil.py +++ b/flopy/utils/datautil.py @@ -83,6 +83,26 @@ def is_basic_type(obj): return True return False + @staticmethod + def cellid_model_num(data_item_name, model_data, model_dim): + # determine which model to use based on cellid name + # contains hard coded relationship between data item names and + # model number + # TODO: Incorporate this into the DFNs + if model_data: + return None + if data_item_name.startswith("cellidm") and len(data_item_name) > 7: + model_num = data_item_name[7:] + if DatumUtil.is_int(model_num): + return int(model_num) - 1 + if ( + data_item_name == "cellidn" or data_item_name == "cellidsj" + ) and len(model_dim) > 0: + return 0 + elif data_item_name == "cellidm" and len(model_dim) > 1: + return 1 + return None + 
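The naming convention hard-coded in `cellid_model_num` above is the linchpin of this fix. A quick sketch of the name-to-model mapping (the `model_dim` list is a stand-in with one entry per model):

```python
# Illustration only: how exchange/GNC data item names select a model grid.
from flopy.utils.datautil import DatumUtil

model_dim = ["model1_dim", "model2_dim"]  # stand-ins, one per model
assert DatumUtil.cellid_model_num("cellidm1", False, model_dim) == 0
assert DatumUtil.cellid_model_num("cellidm2", False, model_dim) == 1
assert DatumUtil.cellid_model_num("cellidn", False, model_dim) == 0   # GNC
assert DatumUtil.cellid_model_num("cellidsj", False, model_dim) == 0  # GNC
assert DatumUtil.cellid_model_num("cellidm", False, model_dim) == 1   # GNC
assert DatumUtil.cellid_model_num("cellid", True, model_dim) is None  # model data
```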
class PyListUtil: """ From 90f87f2932e17955dee68ec53347c9d06178f94a Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Wed, 12 Jul 2023 14:46:56 -0400 Subject: [PATCH 007/101] fix(binaryfile/gridutil): avoid numpy deprecation warnings (#1868) * use ndarray.item() to retrieve single elements * update gridutil get_disu_kwargs() docstrings --- flopy/utils/binaryfile.py | 12 ++++++------ flopy/utils/gridutil.py | 32 ++++++++++++++++++++++++++++---- 2 files changed, 34 insertions(+), 10 deletions(-) diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index cb5b585afd..8ed15bf54b 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -562,7 +562,7 @@ def get_ts(self, idx): ) # change ilay from header to zero-based if ilay != k: continue - ipos = int(self.iposarray[irec]) + ipos = self.iposarray[irec].item() # Calculate offset necessary to reach intended cell self.file.seek(ipos + int(ioffset), 0) @@ -1031,9 +1031,9 @@ def _build_index(self): print(f"{itxt}: {s}") print("file position: ", ipos) if ( - int(header["imeth"]) != 5 - and int(header["imeth"]) != 6 - and int(header["imeth"]) != 7 + header["imeth"].item() != 5 + and header["imeth"].item() != 6 + and header["imeth"].item() != 7 ): print("") @@ -1141,7 +1141,7 @@ def _get_header(self): ) for name in temp.dtype.names: header2[name] = temp[name] - if int(header2["imeth"]) == 6: + if header2["imeth"].item() == 6: header2["modelnam"] = binaryread(self.file, str, charlen=16) header2["paknam"] = binaryread(self.file, str, charlen=16) header2["modelnam2"] = binaryread(self.file, str, charlen=16) @@ -1689,7 +1689,7 @@ def get_record(self, idx, full3D=False): idx = np.array([idx]) header = self.recordarray[idx] - ipos = int(self.iposarray[idx]) + ipos = self.iposarray[idx].item() self.file.seek(ipos, 0) imeth = header["imeth"][0] diff --git a/flopy/utils/gridutil.py b/flopy/utils/gridutil.py index 953ee97a68..d9ca28c190 100644 --- a/flopy/utils/gridutil.py +++ b/flopy/utils/gridutil.py @@ -58,10 +58,34 @@ def get_lni(ncpl, nodes) -> List[Tuple[int, int]]: return tuples -def get_disu_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): +def get_disu_kwargs( + nlay, + nrow, + ncol, + delr, + delc, + tp, + botm, +): """ - Simple utility for creating args needed to construct - a disu package + Create args needed to construct a DISU package. 
+ + Parameters + ---------- + nlay : int + Number of layers + nrow : int + Number of rows + ncol : int + Number of columns + delr : numpy.ndarray + Column spacing along a row + delc : numpy.ndarray + Row spacing along a column + tp : int or numpy.ndarray + Top elevation(s) of cells in the model's top layer + botm : numpy.ndarray + Bottom elevation(s) of all cells in the model """ def get_nn(k, i, j): @@ -88,7 +112,7 @@ def get_nn(k, i, j): cl12.append(n + 1) hwva.append(n + 1) if k == 0: - top[n] = tp + top[n] = tp.item() if isinstance(tp, np.ndarray) else tp else: top[n] = botm[k - 1] bot[n] = botm[k] From e2efcfcf05cdf4816243bbbcee13e67dd4cdf26a Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 13 Jul 2023 23:30:06 -0400 Subject: [PATCH 008/101] chore: add .vscode to .gitignore, update version to 3.5.0.dev0 (#1875) --- .gitignore | 3 +++ CITATION.cff | 4 ++-- README.md | 4 ++-- code.json | 4 ++-- docs/PyPI_release.md | 2 +- flopy/version.py | 4 ++-- version.txt | 2 +- 7 files changed, 13 insertions(+), 10 deletions(-) diff --git a/.gitignore b/.gitignore index 5576ef0b46..3f1996eb74 100644 --- a/.gitignore +++ b/.gitignore @@ -95,3 +95,6 @@ autotest/.noseids **.env venv app + +# IDE files +.vscode \ No newline at end of file diff --git a/CITATION.cff b/CITATION.cff index d32c914689..f911a18b68 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -3,8 +3,8 @@ message: If you use this software, please cite both the article from preferred-c and the software itself. type: software title: FloPy -version: 3.4.2.dev0 -date-released: '2023-06-30' +version: 3.5.0.dev0 +date-released: '2023-07-13' doi: 10.5066/F7BK19FH abstract: A Python package to create, run, and post-process MODFLOW-based models. repository-artifact: https://pypi.org/project/flopy diff --git a/README.md b/README.md index 233177e147..616afcf984 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@ flopy3 -### Version 3.4.2.dev0 (preliminary) +### Version 3.5.0.dev0 (preliminary) [![flopy continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) [![Read the Docs](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml) @@ -142,7 +142,7 @@ How to Cite ##### ***Software/Code citation for FloPy:*** -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.2.dev0 (preliminary): U.S. Geological Survey Software Release, 30 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0.dev0 (preliminary): U.S. 
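Per the updated docstring above, `get_disu_kwargs` accepts a scalar or array top. A hedged usage sketch (grid values illustrative; the returned dict is assumed to feed `flopy.mf6.ModflowGwfdisu`, as elsewhere in the test suite):

```python
import numpy as np

from flopy.utils.gridutil import get_disu_kwargs

disu_kwargs = get_disu_kwargs(
    nlay=2,
    nrow=3,
    ncol=4,
    delr=np.full(4, 100.0),      # column spacing along a row
    delc=np.full(3, 100.0),      # row spacing along a column
    tp=10.0,                     # scalar top is accepted
    botm=np.array([5.0, 0.0]),   # one bottom elevation per layer
)
# disu = flopy.mf6.ModflowGwfdisu(gwf, **disu_kwargs)
```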
Geological Survey Software Release, 13 July 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Additional FloPy Related Publications diff --git a/code.json b/code.json index ca24c1e542..d802f08bac 100644 --- a/code.json +++ b/code.json @@ -29,9 +29,9 @@ "downloadURL": "https://code.usgs.gov/usgs/modflow/flopy/archive/master.zip", "vcs": "git", "laborHours": -1, - "version": "3.4.2.dev0", + "version": "3.5.0.dev0", "date": { - "metadataLastUpdated": "2023-06-30" + "metadataLastUpdated": "2023-07-13" }, "organization": "U.S. Geological Survey", "permissions": { diff --git a/docs/PyPI_release.md b/docs/PyPI_release.md index 9ea6c3264e..305284e8b6 100644 --- a/docs/PyPI_release.md +++ b/docs/PyPI_release.md @@ -30,7 +30,7 @@ How to Cite *Software/Code citation for FloPy:* -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.4.2.dev0 (preliminary): U.S. Geological Survey Software Release, 30 June 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0.dev0 (preliminary): U.S. Geological Survey Software Release, 13 July 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Disclaimer diff --git a/flopy/version.py b/flopy/version.py index 23af23a369..0d06ec8c9a 100644 --- a/flopy/version.py +++ b/flopy/version.py @@ -1,3 +1,3 @@ -# flopy version file automatically created using update_version.py on June 30, 2023 08:29:09 +# flopy version file automatically created using update_version.py on July 13, 2023 18:32:27 -__version__ = "3.4.2.dev0" +__version__ = "3.5.0.dev0" diff --git a/version.txt b/version.txt index 7387b31eaa..33253064e4 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -3.4.2.dev0 \ No newline at end of file +3.5.0.dev0 \ No newline at end of file From 9168b3a538a5f604010ad5df17ea7e868b8f913f Mon Sep 17 00:00:00 2001 From: Joshua Larsen Date: Fri, 14 Jul 2023 14:03:44 -0700 Subject: [PATCH 009/101] update(_set_neighbors): check for closed iverts and remove closing ivert (#1876) --- flopy/discretization/grid.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py index 428873846d..5da4c9aa6c 100644 --- a/flopy/discretization/grid.py +++ b/flopy/discretization/grid.py @@ -560,6 +560,8 @@ def _set_neighbors(self, reset=False, method="rook"): node_nums = [] if method == "rook": for poly in self.iverts: + if poly[0] == poly[-1]: + poly = poly[:-1] for v in range(len(poly)): geoms.append(tuple(sorted([poly[v - 1], poly[v]]))) node_nums += [node_num] * len(poly) @@ -567,6 +569,8 @@ def _set_neighbors(self, reset=False, method="rook"): else: # queen neighbors for poly in self.iverts: + if poly[0] == poly[-1]: + poly = poly[:-1] for vert in poly: geoms.append(vert) node_nums += [node_num] * len(poly) From ed549798333cf2186516d55ffb1e9d9a537a4a0f Mon Sep 17 00:00:00 2001 From: jdhughes-usgs Date: Sun, 16 Jul 2023 18:19:11 -0500 Subject: [PATCH 010/101] fix(binary): fix binary header information (#1877) * modify pip upgrade in rtd workflow Replaces PR #1848 (Close #1848) --- .github/workflows/rtd.yml | 2 +- 
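The `_set_neighbors` change above boils down to trimming a "closed" vertex list (first vertex repeated at the end) before shared edges are tallied, so the closing vertex is not counted twice:

```python
# Minimal sketch of the guard added to _set_neighbors (values illustrative).
poly = [3, 7, 12, 9, 3]       # closed ivert list: first index repeated last
if poly[0] == poly[-1]:
    poly = poly[:-1]
assert poly == [3, 7, 12, 9]  # closing vertex removed before edge counting
```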
autotest/test_mf6.py | 187 +++++++++++++++++++++++++-------- flopy/mf6/data/mffileaccess.py | 51 ++++++--- flopy/utils/binaryfile.py | 1 + flopy/utils/datafile.py | 12 +-- 5 files changed, 187 insertions(+), 66 deletions(-) diff --git a/.github/workflows/rtd.yml b/.github/workflows/rtd.yml index dddcae5fcc..5df817f687 100644 --- a/.github/workflows/rtd.yml +++ b/.github/workflows/rtd.yml @@ -46,7 +46,7 @@ jobs: cache-dependency-path: pyproject.toml - name: Upgrade pip - run: pip install --upgrade pip + run: python -m pip install --upgrade pip - name: Install flopy and dependencies run: pip install ".[test, doc, optional]" diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index f5c40834b4..3a35108f2c 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -503,40 +503,88 @@ def test_subdir(function_tmpdir): @requires_exe("mf6") -def test_binary_write(function_tmpdir): +@pytest.mark.parametrize("layered", [True, False]) +def test_binary_write(function_tmpdir, layered): nlay, nrow, ncol = 2, 1, 10 shape2d = (nrow, ncol) - shape3d = (nlay, nrow, ncol) + + # data for layers + botm = [4.0, 0.0] + strt = [5.0, 10.0] # create binary data structured + if layered: + idomain_data = [] + botm_data = [] + strt_data = [] + for k in range(nlay): + idomain_data.append( + { + "factor": 1.0, + "filename": f"idomain_l{k+1}.bin", + "data": 1, + "binary": True, + "iprn": 1, + } + ) + botm_data.append( + { + "filename": f"botm_l{k+1}.bin", + "binary": True, + "iprn": 1, + "data": np.full(shape2d, botm[k], dtype=float), + } + ) + strt_data.append( + { + "filename": f"strt_l{k+1}.bin", + "binary": True, + "iprn": 1, + "data": np.full(shape2d, strt[k], dtype=float), + } + ) + else: + idomain_data = { + "filename": "idomain.bin", + "binary": True, + "iprn": 1, + "data": 1, + } + botm_data = { + "filename": "botm.bin", + "binary": True, + "iprn": 1, + "data": np.array( + [ + np.full(shape2d, botm[0], dtype=float), + np.full(shape2d, botm[1], dtype=float), + ] + ), + } + strt_data = { + "filename": "strt.bin", + "binary": True, + "iprn": 1, + "data": np.array( + [ + np.full(shape2d, strt[0], dtype=float), + np.full(shape2d, strt[1], dtype=float), + ] + ), + } + + # binary data that does not vary by layers top_data = { "filename": "top.bin", "binary": True, - "iprn": 0, + "iprn": 1, "data": 10.0, } - botm_data = { - "filename": "botm.bin", - "binary": True, - "iprn": 0, - "data": np.array( - [ - np.full(shape2d, 4.0, dtype=float), - np.full(shape2d, 0.0, dtype=float), - ] - ), - } - strt_data = { - "filename": "strt.bin", - "binary": True, - "iprn": 0, - "data": np.full(shape3d, 10.0, dtype=float), - } rch_data = { 0: { "filename": "recharge.bin", "binary": True, - "iprn": 0, + "iprn": 1, "data": 0.000001, }, } @@ -548,7 +596,7 @@ def test_binary_write(function_tmpdir): 0: { "filename": "chd.bin", "binary": True, - "iprn": 0, + "iprn": 1, "data": chd_data, }, } @@ -566,6 +614,7 @@ def test_binary_write(function_tmpdir): delc=1.0, top=top_data, botm=botm_data, + idomain=idomain_data, ) ModflowGwfnpf( gwf, @@ -573,7 +622,7 @@ def test_binary_write(function_tmpdir): ) ModflowGwfic( gwf, - strt=10.0, + strt=strt_data, ) ModflowGwfchd( gwf, @@ -588,8 +637,8 @@ def test_binary_write(function_tmpdir): @requires_exe("mf6") -@pytest.mark.skip(reason="todo:: after flopy binary fix.") -def test_vor_binary_write(function_tmpdir): +@pytest.mark.parametrize("layered", [True, False]) +def test_vor_binary_write(function_tmpdir, layered): # build voronoi grid boundary = [(0.0, 0.0), (0.0, 1.0), (10.0, 1.0), (10.0, 0.0)] 
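Both parametrizations above lean on FloPy's external-binary array convention: a list of per-layer dicts writes one binary file per layer, while a single dict writes the whole array to one file. A hedged sketch of the two shapes:

```python
# Illustration only: layered vs. flat external binary array specifications.
botm = [4.0, 0.0]
botm_layered = [
    {"filename": f"botm_l{k + 1}.bin", "binary": True, "iprn": 1, "data": b}
    for k, b in enumerate(botm)
]
botm_flat = {"filename": "botm.bin", "binary": True, "iprn": 1, "data": botm}
```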
triangle_ws = function_tmpdir / "triangle" @@ -606,37 +655,84 @@ def test_vor_binary_write(function_tmpdir): # problem dimensions nlay = 2 - shape3d = (nlay, vor.ncpl) + + # data for layers + botm = [4.0, 0.0] + strt = [5.0, 10.0] # build binary data + if layered: + idomain_data = [] + botm_data = [] + strt_data = [] + for k in range(nlay): + idomain_data.append( + { + "factor": 1.0, + "filename": f"idomain_l{k + 1}.bin", + "data": 1, + "binary": True, + "iprn": 1, + } + ) + botm_data.append( + { + "filename": f"botm_l{k + 1}.bin", + "binary": True, + "iprn": 1, + "data": np.full(vor.ncpl, botm[k], dtype=float), + } + ) + strt_data.append( + { + "filename": f"strt_l{k + 1}.bin", + "binary": True, + "iprn": 1, + "data": np.full(vor.ncpl, strt[k], dtype=float), + } + ) + else: + idomain_data = { + "filename": "idomain.bin", + "binary": True, + "iprn": 1, + "data": 1, + } + botm_data = { + "filename": "botm.bin", + "binary": True, + "iprn": 1, + "data": np.array( + [ + np.full(vor.ncpl, botm[0], dtype=float), + np.full(vor.ncpl, botm[1], dtype=float), + ] + ), + } + strt_data = { + "filename": "strt.bin", + "binary": True, + "iprn": 1, + "data": np.array( + [ + np.full(vor.ncpl, strt[0], dtype=float), + np.full(vor.ncpl, strt[1], dtype=float), + ] + ), + } + + # binary data that does not vary by layers top_data = { "filename": "top.bin", "binary": True, - "iprn": 0, + "iprn": 1, "data": 10.0, } - botm_data = { - "filename": "botm.bin", - "binary": True, - "iprn": 0, - "data": np.array( - [ - np.full(vor.ncpl, 4.0, dtype=float), - np.full(vor.ncpl, 0.0, dtype=float), - ] - ), - } - strt_data = { - "filename": "strt.bin", - "binary": True, - "iprn": 0, - "data": np.full(shape3d, 10.0, dtype=float), - } rch_data = { 0: { "filename": "recharge.bin", "binary": True, - "iprn": 0, + "iprn": 1, "data": np.full(vor.ncpl, 0.000001, dtype=float), # 0.000001, }, } @@ -668,6 +764,7 @@ def test_vor_binary_write(function_tmpdir): cell2d=vor.get_disv_gridprops()["cell2d"], top=top_data, botm=botm_data, + idomain=idomain_data, xorigin=0.0, yorigin=0.0, ) diff --git a/flopy/mf6/data/mffileaccess.py b/flopy/mf6/data/mffileaccess.py index 94efd59535..653f54a29b 100644 --- a/flopy/mf6/data/mffileaccess.py +++ b/flopy/mf6/data/mffileaccess.py @@ -212,6 +212,8 @@ def write_binary_file( ): data = self._resolve_cellid_numbers_to_file(data) fd = self._open_ext_file(fname, binary=True, write=True) + if data.size == modelgrid.nnodes: + write_multi_layer = False if write_multi_layer: for layer, value in enumerate(data): self._write_layer( @@ -252,7 +254,14 @@ def _write_layer( ilay=None, ): header_data = self._get_header( - modelgrid, modeltime, stress_period, precision, text, fname, ilay + modelgrid, + modeltime, + stress_period, + precision, + text, + fname, + ilay=ilay, + data=data, ) header_data.tofile(fd) data.tofile(fd) @@ -266,6 +275,7 @@ def _get_header( text, fname, ilay=None, + data=None, ): # handle dis (row, col, lay), disv (ncpl, lay), and disu (nodes) cases if modelgrid is not None and modeltime is not None: @@ -274,13 +284,18 @@ def _get_header( if ilay is None: ilay = modelgrid.nlay if modelgrid.grid_type == "structured": + m1, m2, m3 = modelgrid.ncol, modelgrid.nrow, ilay + if data is not None: + shape3d = modelgrid.nlay * modelgrid.nrow * modelgrid.ncol + if data.size == shape3d: + m1, m2, m3 = shape3d, 1, 1 return BinaryHeader.create( bintype="vardis", precision=precision, text=text, - nrow=modelgrid.nrow, - ncol=modelgrid.ncol, - ilay=ilay, + m1=m1, + m2=m2, + m3=m3, pertim=pertim, totim=totim, 
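As the `_get_header` refactor here shows, every grid type is now funneled into the generic `m1`/`m2`/`m3` header slots, e.g. layered DIS writes store `(ncol, nrow, ilay)` while a single full-grid write collapses to `(nlay*nrow*ncol, 1, 1)`. A small sketch of the collapse rule:

```python
# Illustration only: header dims for a 2x1x10 DIS grid.
nlay, nrow, ncol = 2, 1, 10
per_layer = (ncol, nrow, 1)             # one layer per binary record
full_grid = (nlay * nrow * ncol, 1, 1)  # whole array in a single record
assert full_grid == (20, 1, 1)
```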
kstp=1, @@ -289,24 +304,30 @@ def _get_header( elif modelgrid.grid_type == "vertex": if ilay is None: ilay = modelgrid.nlay + m1, m2, m3 = modelgrid.ncpl, 1, ilay + if data is not None: + shape3d = modelgrid.nlay * modelgrid.ncpl + if data.size == shape3d: + m1, m2, m3 = shape3d, 1, 1 return BinaryHeader.create( bintype="vardisv", precision=precision, text=text, - ncpl=modelgrid.ncpl, - ilay=ilay, - m3=1, + m1=m1, + m2=m2, + m3=m3, pertim=pertim, totim=totim, kstp=1, kper=stress_period, ) elif modelgrid.grid_type == "unstructured": + m1, m2, m3 = modelgrid.nnodes, 1, 1 return BinaryHeader.create( bintype="vardisu", precision=precision, text=text, - nodes=modelgrid.nnodes, + m1=m1, m2=1, m3=1, pertim=pertim, @@ -317,13 +338,14 @@ def _get_header( else: if ilay is None: ilay = 1 + m1, m2, m3 = 1, 1, ilay header = BinaryHeader.create( bintype="vardis", precision=precision, text=text, - nrow=1, - ncol=1, - ilay=ilay, + m1=m1, + m2=m2, + m3=m3, pertim=pertim, totim=totim, kstp=1, @@ -339,14 +361,15 @@ def _get_header( "binary file {}.".format(fname) ) else: + m1, m2, m3 = 1, 1, 1 pertim = np.float64(1.0) header = BinaryHeader.create( bintype="vardis", precision=precision, text=text, - nrow=1, - ncol=1, - ilay=1, + m1=m1, + m2=m2, + m3=m3, pertim=pertim, totim=pertim, kstp=1, diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index 8ed15bf54b..db7ffaf9fb 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -194,6 +194,7 @@ def set_values(self, **kwargs): "ilay", "ncpl", "nodes", + "m1", "m2", "m3", ] diff --git a/flopy/utils/datafile.py b/flopy/utils/datafile.py index 73a7f61b66..87fb4d9d98 100644 --- a/flopy/utils/datafile.py +++ b/flopy/utils/datafile.py @@ -81,9 +81,9 @@ def __init__(self, filetype=None, precision="single"): ("pertim", floattype), ("totim", floattype), ("text", "a16"), - ("ncol", "i4"), - ("nrow", "i4"), - ("ilay", "i4"), + ("m1", "i4"), + ("m2", "i4"), + ("m3", "i4"), ] ) elif self.header_type == "vardisv": @@ -94,8 +94,8 @@ def __init__(self, filetype=None, precision="single"): ("pertim", floattype), ("totim", floattype), ("text", "a16"), - ("ncpl", "i4"), - ("ilay", "i4"), + ("m1", "i4"), + ("m2", "i4"), ("m3", "i4"), ] ) @@ -107,7 +107,7 @@ def __init__(self, filetype=None, precision="single"): ("pertim", floattype), ("totim", floattype), ("text", "a16"), - ("nodes", "i4"), + ("m1", "i4"), ("m2", "i4"), ("m3", "i4"), ] From 64cd26c2363bc3cd7c2a24b69387952e16a03324 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Mon, 17 Jul 2023 22:57:25 -0400 Subject: [PATCH 011/101] docs(release): add note to review deprecations (#1878) --- docs/make_release.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/make_release.md b/docs/make_release.md index b057a5e816..c8bcab8889 100644 --- a/docs/make_release.md +++ b/docs/make_release.md @@ -28,6 +28,12 @@ The FloPy release procedure is mostly automated with GitHub Actions in [`release 3. Update the authors in `CITATION.cff` for the Software/Code citation for FloPy, if required. +4. Review deprecations. To search for deprecation warnings with git: `git grep [-[A/B/C]N] ` for N optional extra lines of context. 
Some terms to search for: + + - deprecated:: + - DEPRECATED + - DeprecationWarning + ## Release procedure From 02fae7d8084f52e852722338038b6ef7a2691e61 Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Wed, 19 Jul 2023 16:16:05 +1200 Subject: [PATCH 012/101] refactor(crs): provide support without pyproj, other deprecations (#1850) * Adjust deprecation docs and messages, suggest to remove for FloPy 3.6 --- .docs/Notebooks/modelgrid_examples.py | 33 +-- .docs/Notebooks/shapefile_export_example.py | 10 +- autotest/test_export.py | 8 +- autotest/test_grid.py | 309 +++++++++++++++----- autotest/test_lgr.py | 10 +- autotest/test_modflow.py | 5 +- autotest/test_modflowdis.py | 4 +- autotest/test_mp6.py | 6 +- flopy/discretization/grid.py | 271 ++++++++++++++--- flopy/discretization/structuredgrid.py | 37 ++- flopy/discretization/unstructuredgrid.py | 37 ++- flopy/discretization/vertexgrid.py | 37 ++- flopy/export/shapefile_utils.py | 104 +++++-- flopy/export/utils.py | 13 +- flopy/mf6/mfmodel.py | 4 - flopy/modflow/mf.py | 18 +- flopy/modflow/mfdis.py | 52 ++-- flopy/utils/crs.py | 91 +++--- flopy/utils/mfreadnam.py | 8 +- flopy/utils/modpathfile.py | 81 +++-- 20 files changed, 789 insertions(+), 349 deletions(-) diff --git a/.docs/Notebooks/modelgrid_examples.py b/.docs/Notebooks/modelgrid_examples.py index 0c7b751a71..da0fc86580 100644 --- a/.docs/Notebooks/modelgrid_examples.py +++ b/.docs/Notebooks/modelgrid_examples.py @@ -163,7 +163,7 @@ # set reference infromation modelgrid1.set_coord_info( - xoff=xoff, yoff=yoff, angrot=angrot, epsg=epsg, proj4=proj4 + xoff=xoff, yoff=yoff, angrot=angrot, crs=epsg ) print("After: {}".format(modelgrid1)) @@ -217,9 +217,8 @@ # - `botm` : Array of layer Botm elevations # - `idomain` : An ibound or idomain array that specifies active and inactive cells # - `lenuni` : Model length unit integer -# - `epsg` : epsg code of model coordinate system -# - `proj4` : proj4 str describining model coordinate system -# - `prj` : path to ".prj" projection file that describes the model coordinate system +# - `crs` : either an epsg integer code of model coordinate system, a proj4 string or pyproj CRS instance +# - `prjfile` : path to ".prj" projection file that describes the model coordinate system # - `xoff` : x-coordinate of the lower-left corner of the modelgrid # - `yoff` : y-coordinate of the lower-left corner of the modelgrid # - `angrot` : model grid rotation @@ -363,9 +362,8 @@ # - `botm` : Array of layer Botm elevations # - `idomain` : An ibound or idomain array that specifies active and inactive cells # - `lenuni` : Model length unit integer -# - `epsg` : epsg code of model coordinate system -# - `proj4` : proj4 str describining model coordinate system -# - `prj` : path to ".prj" projection file that describes the model coordinate system +# - `crs` : either an epsg integer code of model coordinate system, a proj4 string or pyproj CRS instance +# - `prjfile` : path to ".prj" projection file that describes the model coordinate system # - `xoff` : x-coordinate of the lower-left corner of the modelgrid # - `yoff` : y-coordinate of the lower-left corner of the modelgrid # - `angrot` : model grid rotation @@ -482,9 +480,8 @@ def load_iverts(fname): # - `idomain` : An ibound or idomain array that specifies active and inactive cells # - `lenuni` : Model length unit integer # - `ncpl` : one dimensional array of number of cells per model layer -# - `epsg` : epsg code of model coordinate system -# - `proj4` : proj4 str describining model coordinate system -# - `prj` : path to 
".prj" projection file that describes the model coordinate system +# - `crs` : either an epsg integer code of model coordinate system, a proj4 string or pyproj CRS instance +# - `prjfile` : path to ".prj" projection file that describes the model coordinate system # - `xoff` : x-coordinate of the lower-left corner of the modelgrid # - `yoff` : y-coordinate of the lower-left corner of the modelgrid # - `angrot` : model grid rotation @@ -585,7 +582,7 @@ def load_iverts(fname): xoff=622241.1904510253, yoff=3343617.741737109, angrot=15.0, - proj4="+proj=utm +zone=14 +ellps=WGS84 +datum=WGS84 +units=m +no_defs", + crs="+proj=utm +zone=14 +ellps=WGS84 +datum=WGS84 +units=m +no_defs", ) # - @@ -626,7 +623,7 @@ def load_iverts(fname): # - `angrot_radians` : returns the angle of rotation of the modelgrid in radians # - `epsg` : returns the modelgrid epsg code if it is set # - `proj4` : returns the modelgrid proj4 string if it is set -# - `prj` : returns the path to the modelgrid projection file if it is set +# - `prjfile` : returns the path to the modelgrid projection file if it is set # Access and print some of these properties print( @@ -729,8 +726,8 @@ def load_iverts(fname): # - `xoff` : lower-left corner of modelgrid x-coordinate location # - `yoff` : lower-left corner of modelgrid y-coordinate location # - `angrot` : rotation of model grid in degrees -# - `epsg` : epsg code for model grid projection -# - `proj4` : proj4 string describing the model grid projection +# - `crs` : either an epsg integer code of model coordinate system, a proj4 string or pyproj CRS instance +# - `prjfile` : path to ".prj" projection file that describes the model coordinate system # - `merge_coord_info` : boolean flag to either merge changes with the existing coordinate info or clear existing coordinate info before applying changes. # + @@ -906,15 +903,15 @@ def load_iverts(fname): # Method to write a shapefile of the grid with just the cellid attributes. Input parameters include: # # - `filename` : shapefile name -# - `epsg` : optional epsg code of the coordinate system projection -# - `prj` : optional, input projection file to be used to define the coordinate system projection +# - `crs` : either an epsg integer code of model coordinate system, a proj4 string or pyproj CRS instance +# - `prjfile` : path to ".prj" projection file that describes the model coordinate system # + # write a shapefile shp_name = os.path.join(gridgen_ws, "freyberg-6_grid.shp") -epsg = "32614" +epsg = 32614 -ml.modelgrid.write_shapefile(shp_name, epsg) +ml.modelgrid.write_shapefile(shp_name, crs=epsg) # - try: diff --git a/.docs/Notebooks/shapefile_export_example.py b/.docs/Notebooks/shapefile_export_example.py index 7dff344139..95377b71fd 100644 --- a/.docs/Notebooks/shapefile_export_example.py +++ b/.docs/Notebooks/shapefile_export_example.py @@ -64,7 +64,7 @@ # the coordinate information where the grid is located in a projected coordinate system (e.g. 
UTM) grid = m.modelgrid -grid.set_coord_info(xoff=273170, yoff=5088657, epsg=26916) +grid.set_coord_info(xoff=273170, yoff=5088657, crs=26916) grid.extent @@ -143,7 +143,7 @@ # ##### write the shapefile fname = "{}/bcs.shp".format(outdir) -recarray2shp(spd.to_records(), geoms=polygons, shpname=fname, epsg=grid.epsg) +recarray2shp(spd.to_records(), geoms=polygons, shpname=fname, crs=grid.epsg) ax = plt.subplot(1, 1, 1, aspect="equal") extents = grid.extent @@ -173,7 +173,7 @@ geoms = [Point(x, y) for x, y in zip(welldata.x_utm, welldata.y_utm)] fname = "{}/wel_data.shp".format(outdir) -recarray2shp(welldata.to_records(), geoms=geoms, shpname=fname, epsg=grid.epsg) +recarray2shp(welldata.to_records(), geoms=geoms, shpname=fname, crs=grid.epsg) # - ax = plt.subplot(1, 1, 1, aspect="equal") @@ -207,7 +207,7 @@ rivdata.to_records(index=False), geoms=lines, shpname=lines_shapefile, - epsg=grid.epsg, + crs=grid.epsg, ) # - @@ -242,7 +242,7 @@ linesdata.drop("geometry", axis=1).to_records(), geoms=linesdata.geometry.values, shpname=lines_shapefile, - epsg=grid.epsg, + crs=grid.epsg, ) ax = plt.subplot(1, 1, 1, aspect="equal") diff --git a/autotest/test_export.py b/autotest/test_export.py index 96f3503a28..a21c060c11 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -328,8 +328,12 @@ def test_write_gridlines_shapefile(function_tmpdir): outshp = function_tmpdir / "gridlines.shp" write_gridlines_shapefile(outshp, sg) - for suffix in [".dbf", ".prj", ".shp", ".shx"]: + for suffix in [".dbf", ".shp", ".shx"]: assert outshp.with_suffix(suffix).exists() + if has_pkg("pyproj"): + assert outshp.with_suffix(".prj").exists() + else: + assert not outshp.with_suffix(".prj").exists() with shapefile.Reader(str(outshp)) as sf: assert sf.shapeType == shapefile.POLYLINE @@ -991,7 +995,7 @@ def test_polygon_from_ij_with_epsg(function_tmpdir): fpth2 = os.path.join(ws, "26715.prj") shutil.copy(fpth, fpth2) fpth = os.path.join(ws, "test.shp") - recarray2shp(recarray, geoms, fpth, prj=fpth2) + recarray2shp(recarray, geoms, fpth, prjfile=fpth2) # test_dtypes fpth = os.path.join(ws, "test.shp") diff --git a/autotest/test_grid.py b/autotest/test_grid.py index 22b9724a80..246f9e48b9 100644 --- a/autotest/test_grid.py +++ b/autotest/test_grid.py @@ -1,4 +1,6 @@ +import re import os +import warnings from warnings import warn import matplotlib @@ -9,10 +11,9 @@ from flaky import flaky from matplotlib import pyplot as plt from modflow_devtools.markers import requires_exe, requires_pkg -from packaging import version +from modflow_devtools.misc import has_pkg from pytest_cases import parametrize_with_cases -import flopy from flopy.discretization import StructuredGrid, UnstructuredGrid, VertexGrid from flopy.mf6 import MFSimulation from flopy.modflow import Modflow, ModflowDis @@ -22,6 +23,11 @@ from flopy.utils.voronoi import VoronoiGrid +HAS_PYPROJ = has_pkg("pyproj") +if HAS_PYPROJ: + import pyproj + + @pytest.fixture def minimal_unstructured_grid_info(): d = { @@ -549,7 +555,6 @@ def test_unstructured_from_gridspec(example_data_path): assert min(grid.botm) == min([xyz[2] for xyz in expected_verts]) -@requires_pkg("pyproj") @pytest.mark.parametrize( "crs,expected_srs", ( @@ -566,43 +571,59 @@ def test_unstructured_from_gridspec(example_data_path): def test_grid_crs( minimal_unstructured_grid_info, crs, expected_srs, function_tmpdir ): - import pyproj + expected_epsg = None + if match := re.findall(r"epsg:([\d]+)", expected_srs or "", re.IGNORECASE): + expected_epsg = int(match[0]) + if not HAS_PYPROJ and 
isinstance(crs, str) and "epsg" not in crs.lower(): + # pyproj needed to derive 'epsg' from PROJ string + expected_epsg = None d = minimal_unstructured_grid_info delr = np.ones(10) delc = np.ones(10) - sg = StructuredGrid(delr=delr, delc=delc, crs=crs) - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert sg.crs.srs == expected_srs - usg = UnstructuredGrid(**d, crs=crs) - assert getattr(sg.crs, "srs", None) == expected_srs + def do_checks(g): + if HAS_PYPROJ and crs is not None: + assert isinstance(g.crs, pyproj.CRS) + assert g.crs.srs == expected_srs + else: + assert g.crs is None + assert g.epsg == expected_epsg + + # test each grid type (class) + do_checks(StructuredGrid(delr=delr, delc=delc, crs=crs)) + do_checks(UnstructuredGrid(**d, crs=crs)) + do_checks(VertexGrid(vertices=d["vertices"], crs=crs)) - vg = VertexGrid(vertices=d["vertices"], crs=crs) - assert getattr(sg.crs, "srs", None) == expected_srs + # test deprecated 'epsg' parameter + if isinstance(crs, int): + with pytest.deprecated_call(): + do_checks(StructuredGrid(delr=delr, delc=delc, epsg=crs)) - # test input of pyproj.CRS object - if crs == 26916: - sg2 = StructuredGrid(delr=delr, delc=delc, crs=sg.crs) + if HAS_PYPROJ and crs == 26916: + crs_obj = get_authority_crs(crs) - if crs is not None: - assert isinstance(sg2.crs, pyproj.CRS) - assert getattr(sg2.crs, "srs", None) == expected_srs + # test input of pyproj.CRS object + do_checks(StructuredGrid(delr=delr, delc=delc, crs=crs_obj)) # test input of projection file prjfile = function_tmpdir / "grid_crs.prj" - with open(prjfile, "w", encoding="utf-8") as dest: - write_text = sg.crs.to_wkt() - dest.write(write_text) + prjfile.write_text(crs_obj.to_wkt(), encoding="utf-8") + + do_checks(StructuredGrid(delr=delr, delc=delc, prjfile=prjfile)) - sg3 = StructuredGrid(delr=delr, delc=delc, prjfile=prjfile) - if crs is not None: - assert isinstance(sg3.crs, pyproj.CRS) - assert getattr(sg3.crs, "srs", None) == expected_srs + # test deprecated 'prj' parameter + with pytest.deprecated_call(): + do_checks(StructuredGrid(delr=delr, delc=delc, prj=prjfile)) + + # test deprecated 'proj4' parameter + with warnings.catch_warnings(): + warnings.simplefilter("ignore") # pyproj warning about conversion + proj4 = crs_obj.to_proj4() + with pytest.deprecated_call(): + do_checks(StructuredGrid(delr=delr, delc=delc, proj4=proj4)) -@requires_pkg("pyproj") @pytest.mark.parametrize( "crs,expected_srs", ( @@ -618,82 +639,208 @@ def test_grid_crs( ), ) def test_grid_set_crs(crs, expected_srs, function_tmpdir): - import pyproj + expected_epsg = None + if match := re.findall(r"epsg:([\d]+)", expected_srs or "", re.IGNORECASE): + expected_epsg = int(match[0]) + if not HAS_PYPROJ and isinstance(crs, str) and "epsg" not in crs.lower(): + # pyproj needed to derive 'epsg' from PROJ string + expected_epsg = None + elif HAS_PYPROJ and crs is not None and expected_epsg is None: + expected_epsg = pyproj.CRS.from_user_input(crs).to_epsg() delr = np.ones(10) delc = np.ones(10) - sg = StructuredGrid(delr=delr, delc=delc) - sg.crs = crs - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert getattr(sg.crs, "srs", None) == expected_srs - # test setting the crs via set_coord_info + def do_checks(g, *, exp_srs=expected_srs, exp_epsg=expected_epsg): + if HAS_PYPROJ: + if crs is not None: + assert isinstance(g.crs, pyproj.CRS) + assert getattr(g.crs, "srs", None) == exp_srs + else: + assert g.crs is None + assert g.epsg == exp_epsg + + # test set_coord_info with a grid object + sg = 
StructuredGrid(delr=delr, delc=delc, crs=crs) + do_checks(sg) + + # no change sg.set_coord_info() - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert getattr(sg.crs, "srs", None) == expected_srs + do_checks(sg) + + # use 'crs' arg sg.set_coord_info(crs=crs) - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert getattr(sg.crs, "srs", None) == expected_srs + do_checks(sg) + + # use different 'crs' sg.set_coord_info(crs=26915, merge_coord_info=False) - assert getattr(sg.crs, "srs", None) == "EPSG:26915" - # reset back to test case crs + do_checks(sg, exp_srs="EPSG:26915", exp_epsg=26915) + + # test deprecated 'epsg' parameter + if isinstance(crs, int): + with pytest.deprecated_call(): + sg.set_coord_info(epsg=crs) + do_checks(sg) + + # use 'crs' setter + sg = StructuredGrid(delr=delr, delc=delc) + if HAS_PYPROJ: + sg.crs = crs + do_checks(sg) + else: + if crs is None: + sg.crs = crs + else: + with pytest.warns(): + # cannot set 'crs' property without pyproj + sg.crs = crs + do_checks(sg, exp_epsg=None) + + # unset 'crs', and check that 'epsg' is also none (sometimes) + sg = StructuredGrid(delr=delr, delc=delc, crs=crs) + sg.crs = None + assert sg.crs is None + if HAS_PYPROJ: + # with pyproj, '_epsg' was never populated by getter + assert sg.epsg is None + else: + # without pyproj, '_epsg' is populated from specific 'crs' inputs + assert sg.epsg == expected_epsg + + # unset 'crs', but check that 'epsg' is retained sg = StructuredGrid(delr=delr, delc=delc, crs=crs) + assert sg.epsg == expected_epsg # populate '_epsg' via getter + sg.crs = None + assert sg.crs is None + assert sg.epsg == expected_epsg + + # unset 'epsg' + sg.epsg = None + assert sg.epsg is None + + # unset 'proj4' + sg.proj4 = None + assert sg.proj4 is None + + # set and unset 'prjfile' with a non-existing file + prjfile = "test" + assert sg.prjfile is None + sg.prjfile = prjfile + assert sg.prjfile == prjfile + sg.prjfile = None + assert sg.prjfile is None + + if HAS_PYPROJ and crs is not None: + crs_obj = get_authority_crs(crs) - # test input of projection file - if crs is not None: + # test input of projection file prjfile = function_tmpdir / "grid_crs.prj" - with open(prjfile, "w", encoding="utf-8") as dest: - write_text = sg.crs.to_wkt() - dest.write(write_text) + prjfile.write_text(crs_obj.to_wkt(), encoding="utf-8") + + def do_prjfile_checks(g): + assert isinstance(g.crs, pyproj.CRS) + assert g.crs.srs == expected_srs + assert g.epsg == expected_epsg + assert g.prjfile == prjfile + + # test with 'prjfile' setter and parameter sg = StructuredGrid(delr=delr, delc=delc) sg.prjfile = prjfile - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert getattr(sg.crs, "srs", None) == expected_srs - assert sg.prjfile == prjfile + do_prjfile_checks(sg) sg.set_coord_info(prjfile=prjfile) - if crs is not None: - assert isinstance(sg.crs, pyproj.CRS) - assert getattr(sg.crs, "srs", None) == expected_srs - assert sg.prjfile == prjfile - - # test setting another crs - sg.crs = 26915 - assert sg.crs == get_authority_crs(26915) - - if ( - version.parse(flopy.__version__) < version.parse("3.4") - and crs is not None - ): - pyproj_crs = get_authority_crs(crs) - epsg = pyproj_crs.to_epsg() - if epsg is not None: + do_prjfile_checks(sg) + + # test with deprecated 'prj' getter/setter + with pytest.deprecated_call(): + assert sg.prj == prjfile + with pytest.deprecated_call(): + sg.prj = prjfile + do_prjfile_checks(sg) + + # test deprecated 'proj4' parameter + with warnings.catch_warnings(): + 
warnings.simplefilter("ignore")  # pyproj warning about conversion
+            proj4 = crs_obj.to_proj4()
+        with pytest.deprecated_call():
+            sg.set_coord_info(proj4=proj4)
+        do_checks(sg)
+
+    if HAS_PYPROJ:
+        # copy previous non-None epsg
+        prev_epsg = sg.epsg
+        # test setting another crs
+        sg.crs = 26915
+        assert sg.crs == get_authority_crs(26915)
+        # note that 'epsg' is not updated by setting 'crs', unless it was None
+        assert sg.epsg in (prev_epsg, 26915)
+
+    if HAS_PYPROJ and crs is not None:
+        if epsg := crs_obj.to_epsg():
             sg = StructuredGrid(delr=delr, delc=delc, crs=crs)
             sg.epsg = epsg
-            if crs is not None:
-                assert isinstance(sg.crs, pyproj.CRS)
-                assert getattr(sg.crs, "srs", None) == expected_srs
-                assert sg.epsg == epsg
+            do_checks(sg, exp_epsg=epsg)
+        with warnings.catch_warnings():
+            warnings.simplefilter("ignore")  # pyproj warning about conversion
+            proj4 = crs_obj.to_proj4()
         sg = StructuredGrid(delr=delr, delc=delc)
-        sg.proj4 = pyproj_crs.to_proj4()
-        if crs is not None:
-            assert isinstance(sg.crs, pyproj.CRS)
-            assert getattr(sg.crs, "srs", None) == expected_srs
-            assert sg.proj4 == pyproj_crs.to_proj4()
-
-    if crs is not None:
-        sg = StructuredGrid(delr=delr, delc=delc)
+        sg.proj4 = proj4
+        do_checks(sg)
+        assert sg.proj4 == proj4
+
+        sg = StructuredGrid(delr=delr, delc=delc)
+        with pytest.deprecated_call():
             sg.prj = prjfile
-        if crs is not None:
-            assert isinstance(sg.crs, pyproj.CRS)
-            assert getattr(sg.crs, "srs", None) == expected_srs
+        do_checks(sg)
+        with pytest.deprecated_call():
             assert sg.prj == prjfile
+
+
+def test_grid_crs_exceptions():
+    delr = np.ones(10)
+    delc = np.ones(10)
+    sg = StructuredGrid(delr=delr, delc=delc, crs="EPSG:26915")
+
+    # test bad 'epsg' parameter
+    bad_epsg = "EPSG:26915"
+    with pytest.raises(ValueError):
+        StructuredGrid(delr=delr, delc=delc, epsg=bad_epsg)
+    with pytest.raises(ValueError):
+        sg.epsg = bad_epsg
+    with pytest.raises(ValueError):
+        sg.set_coord_info(epsg=bad_epsg)
+
+    # test bad 'proj4' parameter
+    bad_proj4 = 26915
+    with pytest.raises(ValueError):
+        StructuredGrid(delr=delr, delc=delc, proj4=bad_proj4)
+    with pytest.raises(ValueError):
+        sg.proj4 = bad_proj4
+    with pytest.raises(ValueError):
+        sg.set_coord_info(proj4=bad_proj4)
+
+    # test bad 'prjfile' parameter
+    bad_prjfile = 0
+    with pytest.raises(ValueError):
+        StructuredGrid(delr=delr, delc=delc, prjfile=bad_prjfile)
+    with pytest.raises(ValueError):
+        sg.prjfile = bad_prjfile
+
+    # test non-existing file
+    not_a_file = "not-a-file"
+    with pytest.raises(FileNotFoundError):
+        StructuredGrid(delr=delr, delc=delc, prjfile=not_a_file)
+    # note "sg.prjfile = not_a_file" intentionally does not raise anything
+
+    # test unhandled keyword
+    with pytest.raises(TypeError):
+        StructuredGrid(delr=delr, delc=delc, unused_param=None)
+
+    # set_coord_info never had a 'prj' parameter; test it as unhandled keyword
+    with pytest.raises(TypeError):
+        sg.set_coord_info(prj=not_a_file)
+
+
 def test_tocvfd1():
     vertdict = {}
     vertdict[0] = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
diff --git a/autotest/test_lgr.py b/autotest/test_lgr.py
index 2c72b68740..2a01b8ae8c 100644
--- a/autotest/test_lgr.py
+++ b/autotest/test_lgr.py
@@ -33,7 +33,7 @@ def singleModel(
     steady,
     xul,
     yul,
-    proj4_str,
+    crs,
     mfExe,
     rundir=".",
     welInfo=[],
@@ -77,7 +77,7 @@ def singleModel(
         unitnumber=11 + iLUoffset,
         xul=xul,
         yul=yul,
-        proj4_str=proj4_str,
+        crs=crs,
         start_datetime="28/2/2019",
     )
@@ -192,7 +192,7 @@ def test_simple_lgrmodel_from_scratch(function_tmpdir):
     laytyp = 0
     xul_c = 50985.00
     yul_c = 416791.06
-
proj4_str = "EPSG:28992" + crs = "EPSG:28992" nper = 1 at = 42 perlen = [at] @@ -235,7 +235,7 @@ def test_simple_lgrmodel_from_scratch(function_tmpdir): steady, xul_c, yul_c, - proj4_str, + crs, "mflgr", rundir=function_tmpdir, welInfo=welInfo, @@ -265,7 +265,7 @@ def test_simple_lgrmodel_from_scratch(function_tmpdir): steady, xul_m, yul_m, - proj4_str, + crs, "mflgr", rundir=function_tmpdir, welInfo=welInfo, diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index 72993c8909..3838ef28cf 100644 --- a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -261,8 +261,7 @@ def test_sr(function_tmpdir): raise AssertionError() if extents[3] != 12355: raise AssertionError() - if mm.modelgrid.crs.srs != "EPSG:26916": - raise AssertionError() + assert mm.modelgrid.epsg == 26916 mm.dis.top = 5000 @@ -543,7 +542,7 @@ def test_read_usgs_model_reference(function_tmpdir, model_reference_path): m2 = Modflow.load("junk.nam", model_ws=ws) m2.modelgrid.read_usgs_model_reference_file(mrf_path) - assert m2.modelgrid.crs.to_epsg() == 26916 + assert m2.modelgrid.epsg == 26916 # have to delete this, otherwise it will mess up other tests to_del = glob.glob(f"{mrf_path}*") for f in to_del: diff --git a/autotest/test_modflowdis.py b/autotest/test_modflowdis.py index eb36dbe3e5..be51b7c014 100644 --- a/autotest/test_modflowdis.py +++ b/autotest/test_modflowdis.py @@ -25,10 +25,12 @@ def test_dis_sr(): rotation=rotation, xul=xul, yul=yul, - proj4_str="epsg:2243", + crs="epsg:2243", ) # Use StructuredGrid instead x, y = bg.modelgrid.get_coords(0, delc * nrow) np.testing.assert_almost_equal(x, xul) np.testing.assert_almost_equal(y, yul) + assert bg.modelgrid.epsg == 2243 + assert bg.modelgrid.angrot == rotation diff --git a/autotest/test_mp6.py b/autotest/test_mp6.py index a42a07f7f4..4637876e01 100644 --- a/autotest/test_mp6.py +++ b/autotest/test_mp6.py @@ -257,8 +257,7 @@ def test_get_destination_data(function_tmpdir, mp6_test_path): xoff=mg.xoffset, yoff=mg.yoffset, angrot=mg.angrot, - epsg=mg.epsg, - proj4=mg.proj4, + crs=mg.epsg, ) ra = shp2recarray(function_tmpdir / "pathlines_1per2.shp") p3_2 = ra.geometry[ra.particleid == 4][0] @@ -298,8 +297,7 @@ def test_get_destination_data(function_tmpdir, mp6_test_path): xoff=mg4._xul_to_xll(xul, 0.0), yoff=mg4._yul_to_yll(yul, 0.0), angrot=0.0, - epsg=mg4.epsg, - proj4=mg4.proj4, + crs=mg4.epsg, ) fpth = function_tmpdir / "dis2.shp" diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py index 5da4c9aa6c..cfa72af05e 100644 --- a/flopy/discretization/grid.py +++ b/flopy/discretization/grid.py @@ -1,5 +1,6 @@ import copy import os +import re import warnings from collections import defaultdict @@ -28,6 +29,18 @@ def update_data(self, data): self.out_of_date = False +def _get_epsg_from_crs_or_proj4(crs, proj4=None): + """Try to get EPSG identifier from a crs object.""" + if isinstance(crs, int): + return crs + if isinstance(crs, str): + if match := re.findall(r"epsg:([\d]+)", crs, re.IGNORECASE): + return int(match[0]) + if proj4 and isinstance(proj4, str): + if match := re.findall(r"epsg:([\d]+)", proj4, re.IGNORECASE): + return int(match[0]) + + class Grid: """ Base class for a structured or unstructured model grid @@ -44,7 +57,7 @@ class Grid: ibound/idomain value for each cell lenuni : int or ndarray model length units - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not 
supported). The value can be anything accepted by
         :meth:`pyproj.CRS.from_user_input() <pyproj.crs.CRS.from_user_input>`,
         such as an authority string (e.g. "EPSG:26916") or a WKT string.
     prjfile : str or pathlike, optional if `crs` is specified
         ESRI-style projection file with well-known text defining the CRS
         for the model grid (must be projected; geographic CRS are not
         supported).
@@ -61,6 +74,15 @@ class Grid:
         in the spatial reference coordinate system
     angrot : float
         rotation angle of model grid, as it is rotated around the origin point
+    **kwargs : dict, optional
+        Support deprecated keyword options.
+
+        .. deprecated:: 3.5
+            The following keyword options will be removed for FloPy 3.6:
+
+              - ``prj`` (str or pathlike): use ``prjfile`` instead.
+              - ``epsg`` (int): use ``crs`` instead.
+              - ``proj4`` (str): use ``crs`` instead.

     Attributes
     ----------
@@ -72,8 +94,6 @@ class Grid:
         bottom elevations of all cells
     idomain : int or ndarray
         ibound/idomain value for each cell
-    crs : pyproj.CRS
-        Coordinate reference system (CRS) for the model grid
     lenuni : int
         modflow lenuni parameter
     xoffset : float
@@ -148,13 +168,11 @@ def __init__(
         idomain=None,
         lenuni=None,
         crs=None,
-        epsg=None,
-        proj4=None,
-        prj=None,
         prjfile=None,
         xoff=0.0,
         yoff=0.0,
         angrot=0.0,
+        **kwargs,
     ):
         lenunits = {0: "undefined", 1: "feet", 2: "meters", 3: "centimeters"}
         LENUNI = {"u": 0, "f": 1, "m": 2, "c": 3}
@@ -175,12 +193,28 @@
         self._lenuni = lenuni
         self._units = lenunits[self._lenuni]

-        self._crs = get_crs(
-            prjfile=prjfile, prj=prj, epsg=epsg, proj4=proj4, crs=crs
-        )
-        self._epsg = epsg
-        self._proj4 = proj4
-        self._prj = prj
+        # Handle deprecated projection kwargs; warnings are raised in crs.py
+        self._crs = None
+        get_crs_args = {"crs": crs, "prjfile": prjfile}
+        if "epsg" in kwargs:
+            self.epsg = get_crs_args["epsg"] = kwargs.pop("epsg")
+        if "proj4" in kwargs:
+            self.proj4 = get_crs_args["proj4"] = kwargs.pop("proj4")
+        if "prj" in kwargs:
+            self.prjfile = get_crs_args["prj"] = kwargs.pop("prj")
+        if kwargs:
+            raise TypeError(f"unhandled keywords: {kwargs}")
+        if prjfile is not None:
+            self.prjfile = prjfile
+        try:
+            self._crs = get_crs(**get_crs_args)
+        except ImportError:
+            # provide some support without pyproj by retaining 'epsg' integer
+            if getattr(self, "_epsg", None) is None:
+                epsg = _get_epsg_from_crs_or_proj4(crs, self.proj4)
+                if epsg is not None:
+                    self.epsg = epsg
+        self._prjfile = prjfile

         self._xoff = xoff
         self._yoff = yoff
@@ -214,6 +248,10 @@ def __repr__(self):
         ]
         if self.crs is not None:
             items.append(f"crs:{self.crs.srs}")
+        elif self.epsg is not None:
+            items.append(f"crs:EPSG:{self.epsg}")
+        elif self.proj4 is not None:
+            items.append(f"proj4_str:{self.proj4}")
         if self.units is not None:
             items.append(f"units:{self.units}")
         if self.lenuni is not None:
@@ -256,49 +294,121 @@ def angrot_radians(self):

     @property
     def crs(self):
+        """Coordinate reference system (CRS) for the model grid.
+
+        If pyproj is not installed this property is always None;
+        see :py:attr:`epsg` for an alternative CRS definition.
+        """
         return self._crs

     @crs.setter
     def crs(self, crs):
-        self._crs = get_crs(crs=crs)
+        if crs is None:
+            self._crs = None
+            return
+        try:
+            self._crs = get_crs(crs=crs)
+        except ImportError:
+            warnings.warn(
+                "cannot set 'crs' property without pyproj; "
+                "try setting 'epsg' or 'proj4' instead",
+                UserWarning,
+            )
+            self._crs = None

     @property
     def epsg(self):
-        if self._crs is not None:
-            return self._crs.to_epsg()
+        """EPSG integer code registered to a coordinate reference system.
+
+        This property is derived from :py:attr:`crs` if pyproj is installed,
+        otherwise it is preserved from the constructor.
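+
+        A rough doctest-style sketch (the grid shape and EPSG code are
+        illustrative only, and pyproj is assumed to be installed):
+
+        >>> import numpy as np
+        >>> from flopy.discretization import StructuredGrid
+        >>> sg = StructuredGrid(delr=np.ones(3), delc=np.ones(3), crs=26916)
+        >>> sg.epsg  # derived from 'crs' via pyproj
+        26916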
+ """ + epsg = None + if hasattr(self, "_epsg"): + epsg = self._epsg + elif self._crs is not None: + epsg = self._crs.to_epsg() + if epsg is not None: + self._epsg = epsg + return epsg @epsg.setter def epsg(self, epsg): - self._crs = get_crs(epsg=epsg) - self._epsg = self._crs.to_epsg() + if not (isinstance(epsg, int) or epsg is None): + raise ValueError("epsg property must be an int or None") + self._epsg = epsg + # If crs was previously unset, use EPSG code + if self._crs is None and epsg is not None: + try: + self._crs = get_crs(crs=epsg) + except ImportError: + pass @property def proj4(self): - if self._crs is not None: - return self._crs.to_proj4() + """PROJ string for a coordinate reference system. + + This property is derived from :py:attr:`crs` if pyproj is installed, + otherwise it preserved from the constructor. + """ + proj4 = None + if hasattr(self, "_proj4"): + proj4 = self._proj4 + elif self._crs is not None: + proj4 = self._crs.to_proj4() + if proj4 is not None: + self._proj4 = proj4 + return proj4 @proj4.setter def proj4(self, proj4): - self._crs = get_crs(proj4=proj4) - self._proj4 = self._crs.to_proj4() + if not (isinstance(proj4, str) or proj4 is None): + raise ValueError("proj4 property must be a str or None") + self._proj4 = proj4 + # If crs was previously unset, use lossy PROJ string + if self._crs is None and proj4 is not None: + try: + self._crs = get_crs(crs=proj4) + except ImportError: + pass @property def prj(self): - return self._prj + warnings.warn( + "prj property is deprecated, use prjfile instead", + PendingDeprecationWarning, + ) + return self.prjfile @prj.setter def prj(self, prj): - self._crs = get_crs(prj=prj) - self._prj = prj + warnings.warn( + "prj property is deprecated, use prjfile instead", + PendingDeprecationWarning, + ) + self.prjfile = prj @property def prjfile(self): - return self._prjfile + """ + Path to a .prj file containing WKT for a coordinate reference system. + """ + return getattr(self, "_prjfile", None) @prjfile.setter def prjfile(self, prjfile): - self._crs = get_crs(prjfile=prjfile) + if prjfile is None: + self._prjfile = None + return + if not isinstance(prjfile, (str, os.PathLike)): + raise ValueError("prjfile property must be str, PathLike or None") self._prjfile = prjfile + # If crs was previously unset, use .prj file input + if self._crs is None: + try: + self._crs = get_crs(prjfile=prjfile) + except (ImportError, FileNotFoundError): + pass @property def top(self): @@ -914,11 +1024,66 @@ def set_coord_info( angrot=None, crs=None, prjfile=None, - epsg=None, - proj4=None, merge_coord_info=True, + **kwargs, ): - new_crs = get_crs(prjfile=prjfile, epsg=epsg, proj4=proj4, crs=crs) + """Set coordinate information for a grid. + + Parameters + ---------- + xoff, yoff : float, optional + X and Y coordinate of the origin point in the spatial reference + coordinate system. + angrot : float, optional + Rotation angle of model grid, as it is rotated around the origin + point. + crs : pyproj.CRS, int, str, optional + Coordinate reference system (CRS) for the model grid + (must be projected; geographic CRS are not supported). + The value can be anything accepted by + :meth:`pyproj.CRS.from_user_input() `, + such as an authority string (eg "EPSG:26916") or a WKT string. + prjfile : str or pathlike, optional + ESRI-style projection file with well-known text defining the CRS + for the model grid (must be projected; geographic CRS are not + supported). + merge_coord_info : bool, default True + If True, retaining previous properties. 
+            If False, overwrite properties with defaults unless new values
+            are specified.
+        **kwargs : dict, optional
+            Support deprecated keyword options.
+
+            .. deprecated:: 3.5
+                The following keyword options will be removed for FloPy 3.6:
+
+                  - ``epsg`` (int): use ``crs`` instead.
+                  - ``proj4`` (str): use ``crs`` instead.
+
+        """
+        if crs is not None:
+            # Force these to be re-evaluated, if/when needed
+            if hasattr(self, "_epsg"):
+                delattr(self, "_epsg")
+            if hasattr(self, "_proj4"):
+                delattr(self, "_proj4")
+        # Handle deprecated projection kwargs; warnings are raised in crs.py
+        get_crs_args = {"crs": crs, "prjfile": prjfile}
+        if "epsg" in kwargs:
+            self.epsg = get_crs_args["epsg"] = kwargs.pop("epsg")
+        if "proj4" in kwargs:
+            self.proj4 = get_crs_args["proj4"] = kwargs.pop("proj4")
+        if kwargs:
+            raise TypeError(f"unhandled keywords: {kwargs}")
+        try:
+            new_crs = get_crs(**get_crs_args)
+        except ImportError:
+            new_crs = None
+            # provide some support without pyproj by retaining 'epsg' integer
+            if getattr(self, "_epsg", None) is None:
+                epsg = _get_epsg_from_crs_or_proj4(crs, self.proj4)
+                if epsg is not None:
+                    self.epsg = epsg
+
         if merge_coord_info:
             if xoff is None:
                 xoff = self._xoff
@@ -940,7 +1105,7 @@
         self._yoff = yoff
         self._angrot = angrot
         self._prjfile = prjfile
-        self.crs = new_crs
+        self._crs = new_crs
         self._require_cache_updates()

     def load_coord_info(self, namefile=None, reffile="usgs.model.reference"):
@@ -963,6 +1128,7 @@ def attribs_from_namfile_header(self, namefile):
         if namefile is None:
             return False
         xul, yul = None, None
+        set_coord_info_args = {}
         header = []
         with open(namefile) as f:
             for line in f:
@@ -998,11 +1164,18 @@
                     self._angrot = float(item.split(":")[1])
                 except:
                     pass
+            elif "crs" in item.lower():
+                try:
+                    crs = ":".join(item.split(":")[1:]).strip()
+                    if crs.lower() != "none":
+                        set_coord_info_args["crs"] = crs
+                except:
+                    pass
             elif "proj4_str" in item.lower():
                 try:
-                    self._proj4 = ":".join(item.split(":")[1:]).strip()
-                    if self._proj4.lower() == "none":
-                        self._proj4 = None
+                    proj4 = ":".join(item.split(":")[1:]).strip()
+                    if proj4.lower() != "none":
+                        set_coord_info_args["proj4"] = proj4
                 except:
                     pass
             elif "start" in item.lower():
@@ -1014,11 +1187,12 @@
         # we need to rotate the modelgrid first, then we can
         # calculate the xll and yll from xul and yul
         if (xul, yul) != (None, None):
-            self.set_coord_info(
-                xoff=self._xul_to_xll(xul),
-                yoff=self._yul_to_yll(yul),
-                angrot=self._angrot,
-            )
+            set_coord_info_args["xoff"] = self._xul_to_xll(xul)
+            set_coord_info_args["yoff"] = self._yul_to_yll(yul)
+            set_coord_info_args["angrot"] = self._angrot
+
+        if set_coord_info_args:
+            self.set_coord_info(**set_coord_info_args)

         return True
@@ -1046,7 +1220,7 @@ def read_usgs_model_reference_file(self, reffile="usgs.model.reference"):
                     elif info[0] == "rotation":
                         self._angrot = float(data)
                     elif info[0] == "epsg":
-                        self.crs = int(data)
+                        self.epsg = int(data)
                     elif info[0] == "proj4":
                         self.crs = data
                     elif info[0] == "start_date":
@@ -1115,21 +1289,32 @@ def _zcoords(self):
         return zbdryelevs, zcenters

     # Exporting
-    def write_shapefile(self, filename="grid.shp", epsg=None, prj=None):
+    def write_shapefile(
+        self, filename="grid.shp", crs=None, prjfile=None, **kwargs
+    ):
         """
         Write a shapefile of the grid with just the row and column attributes.
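+
+        A minimal usage sketch, assuming ``sg`` is an existing grid instance
+        (the file name and EPSG code are illustrative only):
+
+        >>> sg.write_shapefile("grid.shp", crs=26916)  # doctest: +SKIP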
""" from ..export.shapefile_utils import write_grid_shapefile + # Handle deprecated projection kwargs; warnings are raised in crs.py + write_grid_shapefile_args = {} + if "epsg" in kwargs: + write_grid_shapefile_args["epsg"] = kwargs.pop("epsg") + if "prj" in kwargs: + write_grid_shapefile_args["prj"] = kwargs.pop("prj") + if kwargs: + raise TypeError(f"unhandled keywords: {kwargs}") + if crs is None: + crs = self.crs write_grid_shapefile( filename, self, array_dict={}, nan_val=-1.0e9, - crs=self.crs, - epsg=epsg, - prj=prj, + crs=crs, + **write_grid_shapefile_args, ) return diff --git a/flopy/discretization/structuredgrid.py b/flopy/discretization/structuredgrid.py index d1ab5d732c..feb9696b05 100644 --- a/flopy/discretization/structuredgrid.py +++ b/flopy/discretization/structuredgrid.py @@ -97,7 +97,7 @@ class for a structured model grid ibound/idomain value for each cell lenuni : int or ndarray model length units - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -114,6 +114,15 @@ class for a structured model grid in the spatial reference coordinate system angrot : float rotation angle of model grid, as it is rotated around the origin point + **kwargs : dict, optional + Support deprecated keyword options. + + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``prj`` (str or pathlike): use ``prjfile`` instead. + - ``epsg`` (int): use ``crs`` instead. + - ``proj4`` (str): use ``crs`` instead. Properties ---------- @@ -146,9 +155,6 @@ def __init__( idomain=None, lenuni=None, crs=None, - epsg=None, - proj4=None, - prj=None, prjfile=None, xoff=0.0, yoff=0.0, @@ -157,21 +163,20 @@ def __init__( nrow=None, ncol=None, laycbd=None, + **kwargs, ): super().__init__( "structured", - top, - botm, - idomain, - lenuni, - crs, - epsg, - proj4, - prj, - prjfile, - xoff, - yoff, - angrot, + top=top, + botm=botm, + idomain=idomain, + lenuni=lenuni, + crs=crs, + prjfile=prjfile, + xoff=xoff, + yoff=yoff, + angrot=angrot, + **kwargs, ) if delc is not None: self.__nrow = len(delc) diff --git a/flopy/discretization/unstructuredgrid.py b/flopy/discretization/unstructuredgrid.py index aca36444b6..21f52a6c0b 100644 --- a/flopy/discretization/unstructuredgrid.py +++ b/flopy/discretization/unstructuredgrid.py @@ -51,7 +51,7 @@ class UnstructuredGrid(Grid): If the model grid defined in verts and iverts applies for all model layers, then the length of iverts can be equal to ncpl[0] and there is no need to repeat all of the vertex information for cells in layers - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -73,6 +73,15 @@ class UnstructuredGrid(Grid): optional number of connections per node array ja : list or ndarray optional jagged connection array + **kwargs : dict, optional + Support deprecated keyword options. + + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``prj`` (str or pathlike): use ``prjfile`` instead. + - ``epsg`` (int): use ``crs`` instead. + - ``proj4`` (str): use ``crs`` instead. 
Properties ---------- @@ -117,30 +126,26 @@ def __init__( lenuni=None, ncpl=None, crs=None, - epsg=None, - proj4=None, - prj=None, prjfile=None, xoff=0.0, yoff=0.0, angrot=0.0, iac=None, ja=None, + **kwargs, ): super().__init__( "unstructured", - top, - botm, - idomain, - lenuni, - crs, - epsg, - proj4, - prj, - prjfile, - xoff, - yoff, - angrot, + top=top, + botm=botm, + idomain=idomain, + lenuni=lenuni, + crs=crs, + prjfile=prjfile, + xoff=xoff, + yoff=yoff, + angrot=angrot, + **kwargs, ) # if any of these are None, then the grid is not valid diff --git a/flopy/discretization/vertexgrid.py b/flopy/discretization/vertexgrid.py index 340439d3b2..9679b4a334 100644 --- a/flopy/discretization/vertexgrid.py +++ b/flopy/discretization/vertexgrid.py @@ -27,7 +27,7 @@ class for a vertex model grid ibound/idomain value for each cell lenuni : int or ndarray model length units - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -44,6 +44,15 @@ class for a vertex model grid in the spatial reference coordinate system angrot : float rotation angle of model grid, as it is rotated around the origin point + **kwargs : dict, optional + Support deprecated keyword options. + + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``prj`` (str or pathlike): use ``prjfile`` instead. + - ``epsg`` (int): use ``crs`` instead. + - ``proj4`` (str): use ``crs`` instead. Properties ---------- @@ -68,9 +77,6 @@ def __init__( idomain=None, lenuni=None, crs=None, - epsg=None, - proj4=None, - prj=None, prjfile=None, xoff=0.0, yoff=0.0, @@ -78,21 +84,20 @@ def __init__( nlay=None, ncpl=None, cell1d=None, + **kwargs, ): super().__init__( "vertex", - top, - botm, - idomain, - lenuni, - crs, - epsg, - proj4, - prj, - prjfile, - xoff, - yoff, - angrot, + top=top, + botm=botm, + idomain=idomain, + lenuni=lenuni, + crs=crs, + prjfile=prjfile, + xoff=xoff, + yoff=yoff, + angrot=angrot, + **kwargs, ) self._vertices = vertices self._cell1d = cell1d diff --git a/flopy/export/shapefile_utils.py b/flopy/export/shapefile_utils.py index 3e31211f3e..86cb533320 100644 --- a/flopy/export/shapefile_utils.py +++ b/flopy/export/shapefile_utils.py @@ -55,7 +55,10 @@ def write_gridlines_shapefile(filename: Union[str, os.PathLike], mg): wr.record(i) wr.close() - write_prj(filename, modelgrid=mg) + try: + write_prj(filename, modelgrid=mg) + except ImportError: + pass return @@ -65,10 +68,9 @@ def write_grid_shapefile( array_dict, nan_val=np.nan, crs=None, - prjfile=None, - epsg=None, - prj: Optional[Union[str, os.PathLike]] = None, + prjfile: Optional[Union[str, os.PathLike]] = None, verbose=False, + **kwargs, ): """ Method to write a shapefile of gridded input data @@ -83,7 +85,7 @@ def write_grid_shapefile( dictionary of model input arrays nan_val : float value to fill nans - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -92,6 +94,14 @@ def write_grid_shapefile( prjfile : str or pathlike, optional if `crs` is specified ESRI-style projection file with well-known text defining the CRS for the model grid (must be projected; geographic CRS are not supported). 
+ **kwargs : dict, optional + Support deprecated keyword options. + + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. + - ``prj`` (str or PathLike): use ``prjfile`` instead. Returns ------- @@ -213,8 +223,20 @@ def write_grid_shapefile( if verbose: print(f"wrote {flopy_io.relpath_safe(path)}") + # handle deprecated projection kwargs; warnings are raised in crs.py + write_prj_args = {} + if "epsg" in kwargs: + write_prj_args["epsg"] = kwargs.pop("epsg") + if "prj" in kwargs: + write_prj_args["prj"] = kwargs.pop("prj") + if kwargs: + raise TypeError(f"unhandled keywords: {kwargs}") # write the projection file - write_prj(path, mg, crs=crs, epsg=epsg, prj=prj, prjfile=prjfile) + try: + write_prj(path, mg, crs=crs, prjfile=prjfile, **write_prj_args) + except ImportError: + if verbose: + print("projection file not written") return @@ -248,7 +270,7 @@ def model_attributes_to_shapefile( modelgrid : fp.modflow.Grid object if modelgrid is supplied, user supplied modelgrid is used in lieu of the modelgrid attached to the modflow model object - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -403,7 +425,10 @@ def model_attributes_to_shapefile( write_grid_shapefile(path, grid, array_dict) crs = kwargs.get("crs", None) prjfile = kwargs.get("prjfile", None) - write_prj(path, grid, crs=crs, prjfile=prjfile) + try: + write_prj(path, grid, crs=crs, prjfile=prjfile) + except ImportError: + pass def shape_attr_name(name, length=6, keep_layer=False): @@ -547,9 +572,7 @@ def recarray2shp( shpname: Union[str, os.PathLike] = "recarray.shp", mg=None, crs=None, - prjfile=None, - epsg=None, - prj: Optional[Union[str, os.PathLike]] = None, + prjfile: Optional[Union[str, os.PathLike]] = None, verbose=False, **kwargs, ): @@ -571,7 +594,9 @@ def recarray2shp( recarray. shpname : str or PathLike, default "recarray.shp" Path for the output shapefile - crs : pyproj.CRS, optional if `prjfile` is specified + mg : flopy.discretization.Grid object + flopy model grid + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -580,10 +605,18 @@ def recarray2shp( prjfile : str or pathlike, optional if `crs` is specified ESRI-style projection file with well-known text defining the CRS for the model grid (must be projected; geographic CRS are not supported). + **kwargs : dict, optional + Support deprecated keyword options. + + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. + - ``prj`` (str or PathLike): use ``prjfile`` instead. Notes ----- - Uses pyshp. + Uses pyshp and optionally pyproj. 
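+
+    Examples
+    --------
+    A rough sketch; the recarray ``ra``, geometry list ``geoms``, and EPSG
+    code are illustrative only:
+
+    >>> recarray2shp(ra, geoms, shpname="wells.shp", crs=26916)  # doctest: +SKIP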
""" from ..utils.geospatial_utils import GeoSpatialCollection @@ -637,8 +670,23 @@ def recarray2shp( w.record(*r) w.close() - write_prj(shpname, mg, crs=crs, epsg=epsg, prj=prj, prjfile=prjfile) - print(f"wrote {flopy_io.relpath_safe(os.getcwd(), shpname)}") + if verbose: + print(f"wrote {flopy_io.relpath_safe(os.getcwd(), shpname)}") + + # handle deprecated projection kwargs; warnings are raised in crs.py + write_prj_args = {} + if "epsg" in kwargs: + write_prj_args["epsg"] = kwargs.pop("epsg") + if "prj" in kwargs: + write_prj_args["prj"] = kwargs.pop("prj") + if kwargs: + raise TypeError(f"unhandled keywords: {kwargs}") + # write the projection file + try: + write_prj(shpname, mg, crs=crs, prjfile=prjfile, **write_prj_args) + except ImportError: + if verbose: + print("projection file not written") return @@ -646,26 +694,30 @@ def write_prj( shpname, modelgrid=None, crs=None, - epsg=None, - prj=None, prjfile=None, - wkt_string=None, + **kwargs, ): # projection file name output_projection_file = Path(shpname).with_suffix(".prj") - crs = get_crs( - prjfile=prjfile, prj=prj, epsg=epsg, crs=crs, wkt_string=wkt_string - ) + # handle deprecated projection kwargs; warnings are raised in crs.py + get_crs_args = {} + if "epsg" in kwargs: + get_crs_args["epsg"] = kwargs.pop("epsg") + if "prj" in kwargs: + get_crs_args["prj"] = kwargs.pop("prj") + if "wkt_string" in kwargs: + get_crs_args["wkt_string"] = kwargs.pop("wkt_string") + if kwargs: + raise TypeError(f"unhandled keywords: {kwargs}") + crs = get_crs(prjfile=prjfile, crs=crs, **get_crs_args) if crs is None and modelgrid is not None: crs = modelgrid.crs if crs is not None: - with open(output_projection_file, "w", encoding="utf-8") as dest: - write_text = crs.to_wkt() - dest.write(write_text) + output_projection_file.write_text(crs.to_wkt(), encoding="utf-8") else: print( "No CRS information for writing a .prj file.\n" - "Supply an valid coordinate system reference to the attached modelgrid object " - "or .export() method." + "Supply an valid coordinate system reference to the attached " + "modelgrid object or .export() method." ) diff --git a/flopy/export/utils.py b/flopy/export/utils.py index 3b94cb10a9..81d02f0771 100644 --- a/flopy/export/utils.py +++ b/flopy/export/utils.py @@ -596,7 +596,7 @@ def model_export( modelgrid: flopy.discretization.Grid user supplied modelgrid object which will supercede the built in modelgrid object - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -689,7 +689,7 @@ def package_export( modelgrid: flopy.discretization.Grid user supplied modelgrid object which will supercede the built in modelgrid object - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). 
The value can be anything accepted by @@ -888,7 +888,7 @@ def mflist_export(f: Union[str, os.PathLike, NetCdf], mfl, **kwargs): **kwargs : keyword arguments modelgrid : flopy.discretization.Grid model grid instance which will supercede the flopy.model.modelgrid - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by @@ -1679,7 +1679,10 @@ def export_array( elif filename.lower().endswith(".shp"): from ..export.shapefile_utils import write_grid_shapefile - crs = get_crs(**kwargs) + try: + crs = get_crs(**kwargs) + except ImportError: + crs = None write_grid_shapefile( filename, modelgrid, @@ -1866,7 +1869,7 @@ def export_array_contours( ax = plt.subplots()[-1] ctr = contour_array(modelgrid, ax, a, levels=levels) - kwargs["modelgrid"] = modelgrid + kwargs["mg"] = modelgrid export_contours(filename, ctr, fieldname, **kwargs) plt.close() diff --git a/flopy/mf6/mfmodel.py b/flopy/mf6/mfmodel.py index be20e0c849..dcab7fdea9 100644 --- a/flopy/mf6/mfmodel.py +++ b/flopy/mf6/mfmodel.py @@ -640,10 +640,6 @@ def export(self, f, **kwargs): modelgrid: flopy.discretization.Grid User supplied modelgrid object which will supercede the built in modelgrid object - epsg : int - EPSG projection code - prj : str - The prj file name if fmt is set to 'vtk', parameters of vtk.export_model """ diff --git a/flopy/modflow/mf.py b/flopy/modflow/mf.py index 9962f48583..ea1ec9974e 100644 --- a/flopy/modflow/mf.py +++ b/flopy/modflow/mf.py @@ -279,6 +279,12 @@ def modelgrid(self): else: ibound = None + common_kwargs = { + "crs": self._modelgrid.crs or self._modelgrid.epsg, + "xoff": self._modelgrid.xoffset, + "yoff": self._modelgrid.yoffset, + "angrot": self._modelgrid.angrot, + } if self.get_package("disu") is not None: # build unstructured grid self._modelgrid = UnstructuredGrid( @@ -292,12 +298,9 @@ def modelgrid(self): botm=self.disu.bot.array, idomain=ibound, lenuni=self.disu.lenuni, - crs=self._modelgrid.crs, - xoff=self._modelgrid.xoffset, - yoff=self._modelgrid.yoffset, - angrot=self._modelgrid.angrot, iac=self.disu.iac.array, ja=self.disu.ja.array, + **common_kwargs, ) print( "WARNING: Model grid functionality limited for unstructured " @@ -312,12 +315,9 @@ def modelgrid(self): self.dis.botm.array, ibound, self.dis.lenuni, - crs=self._modelgrid.crs, - xoff=self._modelgrid.xoffset, - yoff=self._modelgrid.yoffset, - angrot=self._modelgrid.angrot, nlay=self.dis.nlay, laycbd=self.dis.laycbd.array, + **common_kwargs, ) # resolve offsets @@ -337,7 +337,7 @@ def modelgrid(self): xoff, yoff, self._modelgrid.angrot, - self._modelgrid.crs, + self._modelgrid.crs or self._modelgrid.epsg, ) self._mg_resync = not self._modelgrid.is_complete return self._modelgrid diff --git a/flopy/modflow/mfdis.py b/flopy/modflow/mfdis.py index 85625f3af8..4797d55cac 100644 --- a/flopy/modflow/mfdis.py +++ b/flopy/modflow/mfdis.py @@ -88,7 +88,7 @@ class ModflowDis(Package): rotation : float counter-clockwise rotation (in degrees) of the grid about the lower- left corner. default is 0.0 - crs : pyproj.CRS, optional if `prjfile` is specified + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). 
The value can be anything accepted by
@@ -99,6 +99,11 @@ class ModflowDis(Package):
        for the model grid (must be projected; geographic CRS are not supported).
     start_datetime : str
         starting datetime of the simulation. default is '1/1/1970'
+    **kwargs : dict, optional
+        Support deprecated keyword options.
+
+        .. deprecated:: 3.5
+            ``proj4_str`` will be removed for FloPy 3.6, use ``crs`` instead.

     Attributes
     ----------
@@ -147,10 +152,10 @@ def __init__(
         xul=None,
         yul=None,
         rotation=None,
-        proj4_str=None,
         crs=None,
         prjfile=None,
         start_datetime=None,
+        **kwargs,
     ):
         # set default unit number of one is not specified
         if unitnumber is None:
@@ -242,30 +247,37 @@
             5: "years",
         }

-        if xul is None:
-            xul = model._xul
-        if yul is None:
-            yul = model._yul
-        if rotation is None:
-            rotation = model._rotation
-        crs = get_crs(prjfile=prjfile, proj4=proj4_str, crs=crs)
-        if crs is None:
-            crs = model._crs
-        if start_datetime is None:
-            start_datetime = model._start_datetime
-
         # set the model grid coordinate info
-        xll = None
-        yll = None
         mg = model.modelgrid
+        if rotation is None:
+            rotation = model._rotation
         if rotation is not None:
-            mg.set_coord_info(xoff=None, yoff=None, angrot=rotation)
+            # set rotation before anything else
+            mg.set_coord_info(angrot=rotation)
+        set_coord_info_args = {"crs": crs, "prjfile": prjfile}
+        if "proj4_str" in kwargs:
+            warnings.warn(
+                "the proj4_str argument will be deprecated and will be "
+                "removed in version 3.6. Use crs instead.",
+                PendingDeprecationWarning,
+            )
+            proj4_str = kwargs.pop("proj4_str")
+            if crs is None:
+                set_coord_info_args["crs"] = proj4_str
+        if xul is None:
+            xul = model._xul
         if xul is not None:
-            xll = mg._xul_to_xll(xul)
+            set_coord_info_args["xoff"] = mg._xul_to_xll(xul)
+        if yul is None:
+            yul = model._yul
         if yul is not None:
-            yll = mg._yul_to_yll(yul)
-        mg.set_coord_info(xoff=xll, yoff=yll, angrot=rotation, crs=crs)
+            set_coord_info_args["yoff"] = mg._yul_to_yll(yul)
+
+        # set all other relevant coordinate properties
+        model._modelgrid.set_coord_info(**set_coord_info_args)

+        if start_datetime is None:
+            start_datetime = model._start_datetime
         self.tr = TemporalReference(
             itmuni=self.itmuni, start_datetime=start_datetime
         )
diff --git a/flopy/utils/crs.py b/flopy/utils/crs.py
index a41dcaa141..63fd90e84a 100644
--- a/flopy/utils/crs.py
+++ b/flopy/utils/crs.py
@@ -13,7 +13,7 @@ def get_authority_crs(crs):

     Parameters
     ----------
-    crs : pyproj.CRS
+    crs : pyproj.CRS, int, str
         Coordinate reference system (CRS) for the model grid
         (must be projected; geographic CRS are not supported).
         The value can be anything accepted by
@@ -22,9 +22,9 @@

     Returns
     -------
-    authority_crs : pyproj.CRS instance
+    pyproj.CRS instance
         CRS instance initialized with the name
-        and authority code (e.g. epsg: 5070) produced by
+        and authority code (e.g. "epsg:5070") produced by
         :meth:`pyproj.crs.CRS.to_authority`

     Notes
@@ -53,27 +53,30 @@ def get_shapefile_crs(shapefile):

     Parameters
     ----------
     shapefile : str or pathlike
-        Path to a shapefile or an associated
-        projection (.prj) file.
+        Path to a shapefile or an associated projection (.prj) file.
Returns ------- - crs : pyproj.CRS instance + pyproj.CRS instance """ - pyproj = import_optional_dependency("pyproj") shapefile = Path(shapefile) prjfile = shapefile.with_suffix(".prj") if prjfile.exists(): - with open(prjfile, encoding="utf-8") as src: - wkt = src.read() - crs = pyproj.crs.CRS.from_wkt(wkt) - return get_authority_crs(crs) + pyproj = import_optional_dependency("pyproj") + wkt = prjfile.read_text(encoding="utf-8") + crs = pyproj.crs.CRS.from_wkt(wkt) + return get_authority_crs(crs) + else: + raise FileNotFoundError(f"could not find {prjfile}") def get_crs(prjfile=None, crs=None, **kwargs): - """Helper function to produce a pyproj.CRS object from - various input. Longer-term, this would just handle the ``crs`` + """Helper function to produce a pyproj.CRS object from various input. + + Notes + ----- + Longer-term, this would just handle the ``crs`` and ``prjfile`` arguments, but in the near term, we need to warn users about deprecating the ``prj``, ``epsg``, ``proj4`` and ``wkt_string`` inputs. @@ -81,49 +84,49 @@ def get_crs(prjfile=None, crs=None, **kwargs): Parameters ---------- prjfile : str or pathlike, optional - _description_, by default None - prj : str or pathlike, optional - .. deprecated:: 3.4 - use ``prjfile`` instead. - epsg : int, optional - .. deprecated:: 3.4 - use ``crs`` instead. - proj4 : str, optional - .. deprecated:: 3.4 - use ``crs`` instead. - crs : pyproj.CRS, optional if `prjfile` is specified + ESRI-style projection file with well-known text defining the CRS + for the model grid (must be projected; geographic CRS are not supported). + crs : pyproj.CRS, int, str, optional if `prjfile` is specified Coordinate reference system (CRS) for the model grid (must be projected; geographic CRS are not supported). The value can be anything accepted by :meth:`pyproj.CRS.from_user_input() `, such as an authority string (eg "EPSG:26916") or a WKT string. - wkt_string : str, optional + **kwargs : dict, optional + Support deprecated keyword options. + .. deprecated:: 3.4 - use ``crs`` instead. + The following keyword options will be removed for FloPy 3.6: + + - ``prj`` (str or pathlike): use ``prjfile`` instead. + - ``epsg`` (int): use ``crs`` instead. + - ``proj4`` (str): use ``crs`` instead. + - ``wkt_string`` (str): use ``crs`` instead. Returns ------- - crs : pyproj.CRS instance + pyproj.CRS instance """ if crs is not None: crs = get_authority_crs(crs) - if kwargs.get("prj") is not None: + if "prj" in kwargs: warnings.warn( - "the prj argument will be deprecated and will be removed in version " - "3.4. Use prjfile instead.", + "the prj argument will be deprecated and will be removed in " + "version 3.6. Use prjfile instead.", PendingDeprecationWarning, ) - prjfile = kwargs.get("prj") - if kwargs.get("epsg") is not None: + prjfile = kwargs.pop("prj") + if "epsg" in kwargs: warnings.warn( - "the epsg argument will be deprecated and will be removed in version " - "3.4. Use crs instead.", + "the epsg argument will be deprecated and will be removed in " + "version 3.6. 
Use crs instead.", PendingDeprecationWarning, ) + epsg = kwargs.pop("epsg") if crs is None: - crs = get_authority_crs(kwargs.get("epsg")) - elif prjfile is not None: + crs = get_authority_crs(epsg) + if prjfile is not None: prjfile_crs = get_shapefile_crs(prjfile) if (crs is not None) and (crs != prjfile_crs): raise ValueError( @@ -133,17 +136,21 @@ def get_crs(prjfile=None, crs=None, **kwargs): ) else: crs = prjfile_crs - elif kwargs.get("proj4") is not None: + if "proj4" in kwargs: warnings.warn( - "the proj4 argument will be deprecated and will be removed in version " - "3.4. Use crs instead.", + "the proj4 argument will be deprecated and will be removed in " + "version 3.6. Use crs instead.", PendingDeprecationWarning, ) + proj4 = kwargs.pop("proj4") if crs is None: - crs = get_authority_crs(kwargs.get("proj4")) - elif kwargs.get("wkt_string") is not None: + crs = get_authority_crs(proj4) + if "wkt_string" in kwargs: + wkt_string = kwargs.pop("wkt_string") if crs is None: - crs = get_authority_crs(kwargs.get("wkt_string")) + crs = get_authority_crs(wkt_string) + if kwargs: + raise TypeError(f"unhandled keywords: {kwargs}") if crs is not None and not crs.is_projected: raise ValueError( f"Only projected coordinate reference systems are supported.\n{crs}" diff --git a/flopy/utils/mfreadnam.py b/flopy/utils/mfreadnam.py index be900cde26..d51854fffe 100644 --- a/flopy/utils/mfreadnam.py +++ b/flopy/utils/mfreadnam.py @@ -213,7 +213,6 @@ def attribs_from_namfile_header(namefile): "yul": None, "rotation": 0.0, "crs": None, - "proj4_str": None, } if namefile is None: return defaults @@ -256,21 +255,22 @@ def attribs_from_namfile_header(namefile): except: print(f" could not parse rotation in {namefile}") elif "proj4_str" in item.lower(): + # deprecated, use "crs" instead try: proj4 = ":".join(item.split(":")[1:]).strip() if proj4.lower() == "none": proj4 = None - defaults["crs"] = proj4 + defaults["proj4_str"] = proj4 except: print(f" could not parse proj4_str in {namefile}") elif "crs" in item.lower(): try: crs = ":".join(item.split(":")[1:]).strip() if crs.lower() == "none": - proj4 = None + crs = None defaults["crs"] = crs except: - print(f" could not parse proj4_str in {namefile}") + print(f" could not parse crs in {namefile}") elif "start" in item.lower(): try: start_datetime = item.split(":")[1].strip() diff --git a/flopy/utils/modpathfile.py b/flopy/utils/modpathfile.py index 3a1a70193a..7931abaf9c 100644 --- a/flopy/utils/modpathfile.py +++ b/flopy/utils/modpathfile.py @@ -272,7 +272,7 @@ def write_shapefile( direction="ending", shpname="endpoints.shp", mg=None, - epsg=None, + crs=None, **kwargs, ): """ @@ -295,12 +295,19 @@ def write_shapefile( File path for shapefile mg : flopy.discretization.grid instance Used to scale and rotate Global x,y,z values. - epsg : int - EPSG code for writing projection (.prj) file. If this is not - supplied, the proj4 string or epgs code associated with mg will be - used. + crs : pyproj.CRS, int, str, optional + Coordinate reference system (CRS) for the model grid + (must be projected; geographic CRS are not supported). + The value can be anything accepted by + :meth:`pyproj.CRS.from_user_input() `, + such as an authority string (eg "EPSG:26916") or a WKT string. kwargs : keyword arguments to flopy.export.shapefile_utils.recarray2shp + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. 
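+
+        A minimal usage sketch, assuming ``pl_file`` is an existing pathline
+        or endpoint file object and ``modelgrid`` is the corresponding model
+        grid (the names and EPSG code are illustrative only):
+
+        >>> pl_file.write_shapefile(
+        ...     shpname="output.shp", mg=modelgrid, crs=26916
+        ... )  # doctest: +SKIP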
+ """ from ..discretization import StructuredGrid from ..export.shapefile_utils import recarray2shp @@ -325,9 +332,6 @@ def write_shapefile( if mg is None: raise ValueError("A modelgrid object was not provided.") - if epsg is None: - epsg = mg.epsg - particles = np.unique(series.particleid) geoms = [] @@ -406,7 +410,7 @@ def write_shapefile( sdata[n] += 1 # write the final recarray to a shapefile - recarray2shp(sdata, geoms, shpname=shpname, epsg=epsg, **kwargs) + recarray2shp(sdata, geoms, shpname=shpname, crs=crs, **kwargs) class PathlineFile(_ModpathSeries): @@ -744,7 +748,7 @@ def write_shapefile( direction="ending", shpname="pathlines.shp", mg=None, - epsg=None, + crs=None, **kwargs, ): """ @@ -769,12 +773,19 @@ def write_shapefile( mg : flopy.discretization.grid instance Used to scale and rotate Global x,y,z values in MODPATH Pathline file. - epsg : int - EPSG code for writing projection (.prj) file. If this is not - supplied, the proj4 string or epgs code associated with mg will be - used. + crs : pyproj.CRS, int, str, optional + Coordinate reference system (CRS) for the model grid + (must be projected; geographic CRS are not supported). + The value can be anything accepted by + :meth:`pyproj.CRS.from_user_input() `, + such as an authority string (eg "EPSG:26916") or a WKT string. kwargs : keyword arguments to flopy.export.shapefile_utils.recarray2shp + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. + """ super().write_shapefile( data=pathline_data, @@ -782,7 +793,7 @@ def write_shapefile( direction=direction, shpname=shpname, mg=mg, - epsg=epsg, + crs=crs, **kwargs, ) @@ -1191,7 +1202,7 @@ def write_shapefile( shpname="endpoints.shp", direction="ending", mg=None, - epsg=None, + crs=None, **kwargs, ): """ @@ -1208,12 +1219,19 @@ def write_shapefile( mg : flopy.discretization.grid instance Used to scale and rotate Global x,y,z values in MODPATH Endpoint file. - epsg : int - EPSG code for writing projection (.prj) file. If this is not - supplied, the proj4 string or epgs code associated with mg will be - used. + crs : pyproj.CRS, int, str, optional + Coordinate reference system (CRS) for the model grid + (must be projected; geographic CRS are not supported). + The value can be anything accepted by + :meth:`pyproj.CRS.from_user_input() `, + such as an authority string (eg "EPSG:26916") or a WKT string. kwargs : keyword arguments to flopy.export.shapefile_utils.recarray2shp + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. + """ from ..discretization import StructuredGrid from ..export.shapefile_utils import recarray2shp @@ -1235,8 +1253,6 @@ def write_shapefile( ) if mg is None: raise ValueError("A modelgrid object was not provided.") - if epsg is None: - epsg = mg.epsg if isinstance(mg, StructuredGrid): x, y = geometry.transform( @@ -1255,7 +1271,7 @@ def write_shapefile( for n in self.kijnames: if n in epd.dtype.names: epd[n] += 1 - recarray2shp(epd, geoms, shpname=shpname, epsg=epsg, **kwargs) + recarray2shp(epd, geoms, shpname=shpname, crs=crs, **kwargs) class TimeseriesFile(_ModpathSeries): @@ -1583,7 +1599,7 @@ def write_shapefile( direction="ending", shpname="pathlines.shp", mg=None, - epsg=None, + crs=None, **kwargs, ): """ @@ -1608,12 +1624,19 @@ def write_shapefile( mg : flopy.discretization.grid instance Used to scale and rotate Global x,y,z values in MODPATH Timeseries file. 
- epsg : int - EPSG code for writing projection (.prj) file. If this is not - supplied, the proj4 string or epgs code associated with mg will be - used. + crs : pyproj.CRS, int, str, optional + Coordinate reference system (CRS) for the model grid + (must be projected; geographic CRS are not supported). + The value can be anything accepted by + :meth:`pyproj.CRS.from_user_input() `, + such as an authority string (eg "EPSG:26916") or a WKT string. kwargs : keyword arguments to flopy.export.shapefile_utils.recarray2shp + .. deprecated:: 3.5 + The following keyword options will be removed for FloPy 3.6: + + - ``epsg`` (int): use ``crs`` instead. + """ super().write_shapefile( data=timeseries_data, @@ -1621,6 +1644,6 @@ def write_shapefile( direction=direction, shpname=shpname, mg=mg, - epsg=epsg, + crs=crs, **kwargs, ) From 8b03916b55fb7a923d13648237147124d77f09f5 Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Thu, 20 Jul 2023 07:28:06 -0700 Subject: [PATCH 013/101] fix(time series): fix for multiple time series attached to single package (#1867) (#1873) Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- autotest/regression/test_mf6.py | 204 ++++++++++++++++++++++++++++++++ flopy/mf6/mfpackage.py | 35 ++++-- 2 files changed, 229 insertions(+), 10 deletions(-) diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index 3ffc52f9a0..2f8b1a2418 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -54,6 +54,210 @@ pytestmark = pytest.mark.mf6 +@requires_exe("mf6") +@pytest.mark.regression +def test_ts(function_tmpdir, example_data_path): + ws = function_tmpdir / "ws" + name = "test_ts" + + # create the flopy simulation and tdis objects + sim = flopy.mf6.MFSimulation( + sim_name=name, exe_name="mf6", version="mf6", sim_ws=ws + ) + tdis_rc = [(1.0, 1, 1.0), (10.0, 5, 1.0), (10.0, 5, 1.0), (10.0, 1, 1.0)] + tdis_package = flopy.mf6.modflow.mftdis.ModflowTdis( + sim, time_units="DAYS", nper=4, perioddata=tdis_rc + ) + # create the Flopy groundwater flow (gwf) model object + model_nam_file = f"{name}.nam" + gwf = flopy.mf6.ModflowGwf( + sim, modelname=name, model_nam_file=model_nam_file + ) + # create the flopy iterative model solver (ims) package object + ims = flopy.mf6.modflow.mfims.ModflowIms( + sim, pname="ims", complexity="SIMPLE" + ) + # create the discretization package + bot = np.linspace(-3.0, -50.0 / 3.0, 3) + delrow = delcol = 4.0 + dis = flopy.mf6.modflow.mfgwfdis.ModflowGwfdis( + gwf, + pname="dis", + nogrb=True, + nlay=3, + nrow=101, + ncol=101, + delr=delrow, + delc=delcol, + top=0.0, + botm=bot, + ) + # create the initial condition (ic) and node property flow (npf) packages + ic_package = flopy.mf6.modflow.mfgwfic.ModflowGwfic(gwf, strt=50.0) + npf_package = flopy.mf6.modflow.mfgwfnpf.ModflowGwfnpf( + gwf, + save_flows=True, + icelltype=[1, 0, 0], + k=[5.0, 0.1, 4.0], + k33=[0.5, 0.005, 0.1], + ) + oc = ModflowGwfoc( + gwf, + budget_filerecord=[(f"{name}.cbc",)], + head_filerecord=[(f"{name}.hds",)], + saverecord={ + 0: [("HEAD", "ALL"), ("BUDGET", "ALL")], + 1: [], + }, + printrecord=[("HEAD", "ALL")], + ) + + # build ghb stress period data + ghb_spd_ts = {} + ghb_period = [] + for layer, cond in zip(range(1, 3), [15.0, 1500.0]): + for row in range(0, 15): + ghb_period.append(((layer, row, 9), "tides", cond, "Estuary-L2")) + ghb_spd_ts[0] = ghb_period + + # build ts data + ts_data = [] + for n in range(0, 365): + time = float(n / 11.73) + val = float(n / 60.0) + ts_data.append((time, val)) + ts_dict = { 
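+        # entries below define the external time series file name, the
+        # record name used to reference the series, the series data itself,
+        # the interpolation method, and a scale factor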
+ "filename": "tides.ts", + "time_series_namerecord": "tide", + "timeseries": ts_data, + "interpolation_methodrecord": "linearend", + "sfacrecord": 1.1, + } + + # build ghb package + ghb = flopy.mf6.modflow.mfgwfghb.ModflowGwfghb( + gwf, + print_input=True, + print_flows=True, + save_flows=True, + boundnames=True, + timeseries=ts_dict, + pname="ghb", + maxbound=30, + stress_period_data=ghb_spd_ts, + ) + + # set required time series attributes + ghb.ts.time_series_namerecord = "tides" + + # clean up for next example + gwf.remove_package("ghb") + + # build ghb stress period data + ghb_spd_ts = {} + ghb_period = [] + for layer, cond in zip(range(1, 3), [15.0, 1500.0]): + for row in range(0, 15): + if row < 10: + ghb_period.append( + ((layer, row, 9), "tides", cond, "Estuary-L2") + ) + else: + ghb_period.append(((layer, row, 9), "wl", cond, "Estuary-L2")) + ghb_spd_ts[0] = ghb_period + + # build ts data + ts_data = [] + for n in range(0, 365): + time = float(n / 11.73) + val = float(n / 60.0) + ts_data.append((time, val)) + ts_data2 = [] + for n in range(0, 365): + time = float(n / 11.73) + val = float(n / 30.0) + ts_data2.append((time, val)) + ts_data3 = [] + for n in range(0, 365): + time = float(n / 11.73) + val = float(n / 20.0) + ts_data3.append((time, val)) + + # build ghb package + ghb = flopy.mf6.modflow.mfgwfghb.ModflowGwfghb( + gwf, + print_input=True, + print_flows=True, + save_flows=True, + boundnames=True, + pname="ghb", + maxbound=30, + stress_period_data=ghb_spd_ts, + ) + + # initialize first time series + ghb.ts.initialize( + filename="tides.ts", + timeseries=ts_data, + time_series_namerecord="tides", + interpolation_methodrecord="linearend", + sfacrecord=1.1, + ) + + # append additional time series + ghb.ts.append_package( + filename="wls.ts", + timeseries=ts_data2, + time_series_namerecord="wl", + interpolation_methodrecord="stepwise", + sfacrecord=1.2, + ) + # append additional time series + ghb.ts.append_package( + filename="wls2.ts", + timeseries=ts_data3, + time_series_namerecord="wl2", + interpolation_methodrecord="stepwise", + sfacrecord=1.3, + ) + + sim.write_simulation() + ret = sim.run_simulation() + assert ret + sim2 = flopy.mf6.MFSimulation.load("mfsim.nam", sim_ws=ws, exe_name="mf6") + sim2_ws = os.path.join(ws, "2") + sim2.set_sim_path(sim2_ws) + sim2.write_simulation() + ret = sim2.run_simulation() + assert ret + + # compare datasets + model2 = sim2.get_model() + ghb_m2 = model2.get_package("ghb") + wls_m2 = ghb_m2.ts[1] + wls_m1 = ghb.ts[1] + + ts_m1 = wls_m1.timeseries.get_data() + ts_m2 = wls_m2.timeseries.get_data() + + assert ts_m1[0][1] == 0.0 + assert ts_m1[30][1] == 1.0 + for m1_line, m2_line in zip(ts_m1, ts_m2): + assert abs(m1_line[1] - m2_line[1]) < 0.000001 + + # compare output to expected results + head_1 = os.path.join(ws, f"{name}.hds") + head_2 = os.path.join(sim2_ws, f"{name}.hds") + outfile = os.path.join(ws, "head_compare.dat") + assert compare_heads( + None, + None, + files1=[head_1], + files2=[head_2], + outfile=outfile, + ) + + @requires_exe("mf6") @pytest.mark.regression def test_np001(function_tmpdir, example_data_path): diff --git a/flopy/mf6/mfpackage.py b/flopy/mf6/mfpackage.py index d62fd71e2f..ed5bcb8e84 100644 --- a/flopy/mf6/mfpackage.py +++ b/flopy/mf6/mfpackage.py @@ -1739,7 +1739,7 @@ def __init__( ): # initialize as part of the parent's child package group chld_pkg_grp = self.parent_file._child_package_groups[package_type] - chld_pkg_grp.init_package(self, self._filename) + chld_pkg_grp.init_package(self, self._filename, 
False) # remove any remaining valid kwargs key_list = list(kwargs.keys()) @@ -3264,15 +3264,26 @@ def _next_default_file_path(self): suffix += 1 return possible_path - def init_package(self, package, fname): - # clear out existing packages - self._remove_packages() + def init_package(self, package, fname, remove_packages=True): + if remove_packages: + # clear out existing packages + self._remove_packages() + elif fname is not None: + self._remove_packages(fname) if fname is None: # build a file name fname = self._next_default_file_path() package._filename = fname - # set file record variable - self._filerecord.set_data(fname, autofill=True) + # check file record variable + found = False + fr_data = self._filerecord.get_data() + if fr_data is not None: + for line in fr_data: + if line[0] == fname: + found = True + if not found: + # append file record variable + self._filerecord.append_data([(fname,)]) # add the package to the list self._packages.append(package) @@ -3319,7 +3330,11 @@ def _append_package(self, package, fname, update_frecord=True): # add the package to the list self._packages.append(package) - def _remove_packages(self): - for package in self._packages: - self._model_or_sim.remove_package(package) - self._packages = [] + def _remove_packages(self, fname=None): + rp_list = [] + for idx, package in enumerate(self._packages): + if fname is None or package.filename == fname: + self._model_or_sim.remove_package(package) + rp_list.append(idx) + for idx in reversed(rp_list): + self._packages.pop(idx) From c261ee8145e2791be2c2cc944e5a55cf19ef1560 Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Mon, 24 Jul 2023 15:13:21 +1200 Subject: [PATCH 014/101] refactor(Notebooks): apply pyformat and black QA tools (#1879) --- .docs/Notebooks/array_output_tutorial.py | 10 +-- .docs/Notebooks/dis_triangle_example.py | 10 +-- .docs/Notebooks/dis_voronoi_example.py | 14 ++--- .docs/Notebooks/drain_return_example.py | 10 +-- .docs/Notebooks/export_tutorial.py | 6 +- .docs/Notebooks/export_vtk_tutorial.py | 6 +- .../external_file_handling_tutorial.py | 24 +++---- .../Notebooks/feat_working_stack_examples.py | 8 +-- .../Notebooks/get_transmissivities_example.py | 8 +-- .docs/Notebooks/grid_intersection_example.py | 8 +-- .docs/Notebooks/gridgen_example.py | 28 ++++----- .../groundwater2023_watershed_example.py | 6 +- .../Notebooks/groundwater_paper_example_1.py | 6 +- .../groundwater_paper_uspb_example.py | 6 +- .docs/Notebooks/lake_example.py | 8 +-- .docs/Notebooks/lgr_tutorial01.py | 30 ++++----- .../Notebooks/load_swr_binary_data_example.py | 8 +-- .docs/Notebooks/mf6_complex_model_example.py | 42 ++++++------- .docs/Notebooks/mf6_data_tutorial10.py | 10 +-- .docs/Notebooks/mf6_mnw2_tutorial01.py | 10 +-- .docs/Notebooks/mf6_sfr_tutorial01.py | 2 +- .docs/Notebooks/mf6_simple_model_example.py | 22 +++---- .docs/Notebooks/mf6_support_example.py | 26 ++++---- .docs/Notebooks/mf_boundaries_tutorial.py | 4 +- .docs/Notebooks/mf_error_tutorial01.py | 2 +- .docs/Notebooks/mf_load_tutorial.py | 2 +- .../mf_watertable_recharge_example.py | 6 +- .docs/Notebooks/mfusg_conduit_examples.py | 16 ++--- .docs/Notebooks/mfusg_zaidel_example.py | 6 +- .docs/Notebooks/modelgrid_examples.py | 48 ++++++-------- .../modflow_postprocessing_example.py | 20 +++--- .docs/Notebooks/modpath6_example.py | 8 +-- .../modpath7_create_simulation_example.py | 28 ++++----- .../Notebooks/modpath7_structured_example.py | 24 +++---- .../modpath7_structured_transient_example.py | 12 ++-- .../modpath7_unstructured_example.py | 28 
++++----- .../modpath7_unstructured_lateral_example.py | 2 +- .docs/Notebooks/mt3d-usgs_example.py | 38 +++++------- .docs/Notebooks/mt3dms_examples.py | 62 +++++++++---------- .../Notebooks/mt3dms_sft_lkt_uzt_tutorial.py | 8 +-- .../Notebooks/mt3dms_ssm_package_tutorial.py | 8 +-- .docs/Notebooks/nwt_option_blocks_tutorial.py | 2 +- .docs/Notebooks/pest_tutorial01.py | 14 ++--- .docs/Notebooks/plot_array_example.py | 10 +-- .docs/Notebooks/plot_cross_section_example.py | 20 +++--- .docs/Notebooks/plot_map_view_example.py | 32 +++++----- .../Notebooks/raster_intersection_example.py | 32 +++++----- .../save_binary_data_file_example.py | 6 +- .docs/Notebooks/seawat_henry_example.py | 8 +-- .docs/Notebooks/sfrpackage_example.py | 8 +-- .docs/Notebooks/shapefile_export_example.py | 30 ++++----- .docs/Notebooks/shapefile_feature_examples.py | 6 +- .docs/Notebooks/swi2package_example1.py | 10 +-- .docs/Notebooks/swi2package_example4.py | 24 +++---- .docs/Notebooks/uzf_example.py | 10 +-- .docs/Notebooks/vtk_pathlines_example.py | 2 +- .docs/Notebooks/zonebudget_example.py | 30 +++++---- 57 files changed, 424 insertions(+), 450 deletions(-) diff --git a/.docs/Notebooks/array_output_tutorial.py b/.docs/Notebooks/array_output_tutorial.py index ac790a7af9..b7780adbc3 100644 --- a/.docs/Notebooks/array_output_tutorial.py +++ b/.docs/Notebooks/array_output_tutorial.py @@ -53,9 +53,9 @@ os.makedirs(modelpth) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + ml = flopy.modflow.Modflow.load( @@ -73,10 +73,10 @@ files = ["freyberg.hds", "freyberg.cbc"] for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. 
Output file cannot be found: {f}" print(errmsg) # + [markdown] pycharm={"name": "#%% md\n"} diff --git a/.docs/Notebooks/dis_triangle_example.py b/.docs/Notebooks/dis_triangle_example.py index c140a9e9f8..1691f98874 100644 --- a/.docs/Notebooks/dis_triangle_example.py +++ b/.docs/Notebooks/dis_triangle_example.py @@ -37,9 +37,9 @@ workspace = Path(temp_dir.name) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ## Creating Meshes with the Triangle Class @@ -247,8 +247,8 @@ def chdhead(x): chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data=chdlist) oc = flopy.mf6.ModflowGwfoc( gwf, - budget_filerecord="{}.cbc".format(name), - head_filerecord="{}.hds".format(name), + budget_filerecord=f"{name}.cbc", + head_filerecord=f"{name}.hds", saverecord=[("HEAD", "LAST"), ("BUDGET", "LAST")], printrecord=[("HEAD", "LAST"), ("BUDGET", "LAST")], ) diff --git a/.docs/Notebooks/dis_voronoi_example.py b/.docs/Notebooks/dis_voronoi_example.py index b4b458bb49..98a9da54c0 100644 --- a/.docs/Notebooks/dis_voronoi_example.py +++ b/.docs/Notebooks/dis_voronoi_example.py @@ -41,9 +41,9 @@ workspace = Path(temp_dir.name) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ### Use Triangle to Generate Points for Voronoi Grid @@ -164,8 +164,8 @@ chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data=chdlist) oc = flopy.mf6.ModflowGwfoc( gwf, - budget_filerecord="{}.bud".format(name), - head_filerecord="{}.hds".format(name), + budget_filerecord=f"{name}.bud", + head_filerecord=f"{name}.hds", saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], printrecord=[("HEAD", "LAST"), ("BUDGET", "LAST")], ) @@ -229,8 +229,8 @@ fmi = flopy.mf6.ModflowGwtfmi(gwt, packagedata=pd) oc = flopy.mf6.ModflowGwtoc( gwt, - budget_filerecord="{}.cbc".format(name), - concentration_filerecord="{}.ucn".format(name), + budget_filerecord=f"{name}.cbc", + concentration_filerecord=f"{name}.ucn", saverecord=[("CONCENTRATION", "ALL"), ("BUDGET", "ALL")], ) diff --git a/.docs/Notebooks/drain_return_example.py b/.docs/Notebooks/drain_return_example.py index dfadc6554a..c4c61de7ce 100644 --- a/.docs/Notebooks/drain_return_example.py +++ b/.docs/Notebooks/drain_return_example.py @@ -29,9 +29,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + # temporary directory @@ -78,7 +78,7 @@ raise ValueError("Failed to run.") # plot heads for the drt model -hds = flopy.utils.HeadFile(os.path.join(m.model_ws, m.name + ".hds")) +hds = flopy.utils.HeadFile(os.path.join(m.model_ws, f"{m.name}.hds")) hds.plot(colorbar=True) # remove the drt package and create a standard drain file @@ -98,7 +98,7 @@ raise ValueError("Failed to run.") # plot the heads for the model with the drain -hds = flopy.utils.HeadFile(os.path.join(m.model_ws, m.name + ".hds")) 
+hds = flopy.utils.HeadFile(os.path.join(m.model_ws, f"{m.name}.hds")) hds.plot(colorbar=True) try: diff --git a/.docs/Notebooks/export_tutorial.py b/.docs/Notebooks/export_tutorial.py index c87ba88add..79e4efceb7 100644 --- a/.docs/Notebooks/export_tutorial.py +++ b/.docs/Notebooks/export_tutorial.py @@ -27,7 +27,7 @@ import flopy print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # Load our old friend...the Freyberg model @@ -60,12 +60,12 @@ pth = temp_dir.name # export the whole model (inputs and outputs) -fnc = ml.export(os.path.join(pth, ml.name + ".in.nc")) +fnc = ml.export(os.path.join(pth, f"{ml.name}.in.nc")) # export outputs using spatial reference info hds = flopy.utils.HeadFile(os.path.join(model_ws, "freyberg.hds")) flopy.export.utils.output_helper( - os.path.join(pth, ml.name + ".out.nc"), ml, {"hds": hds} + os.path.join(pth, f"{ml.name}.out.nc"), ml, {"hds": hds} ) # ### Export an array to netCDF or shapefile diff --git a/.docs/Notebooks/export_vtk_tutorial.py b/.docs/Notebooks/export_vtk_tutorial.py index 03e0f37f62..cdaaaac92a 100644 --- a/.docs/Notebooks/export_vtk_tutorial.py +++ b/.docs/Notebooks/export_vtk_tutorial.py @@ -41,7 +41,7 @@ import notebook_utils print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # load model for examples @@ -384,10 +384,10 @@ files = ["mp7p2.hds", "mp7p2.cbb"] for f in files: if os.path.isfile(modelpth / f): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + diff --git a/.docs/Notebooks/external_file_handling_tutorial.py b/.docs/Notebooks/external_file_handling_tutorial.py index 4967aafdcd..f11e701cef 100644 --- a/.docs/Notebooks/external_file_handling_tutorial.py +++ b/.docs/Notebooks/external_file_handling_tutorial.py @@ -32,8 +32,8 @@ from flopy.utils import flopy_io print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # + # make a model @@ -57,7 +57,7 @@ vka = np.zeros_like(hk) fnames = [] for i, h in enumerate(hk): - fname = os.path.join(array_dir, "hk_{0}.ref".format(i + 1)) + fname = os.path.join(array_dir, f"hk_{i + 1}.ref") fnames.append(fname) np.savetxt(fname, h) vka[i] = i + 1 @@ -84,7 +84,7 @@ # We see that a copy of the ``hk`` files as well as the important recharge file were made in the ```model_ws```.Let's looks at the ```lpf``` file -open(os.path.join(ml.model_ws, ml.name + ".lpf"), "r").readlines()[:20] +open(os.path.join(ml.model_ws, f"{ml.name}.lpf")).readlines()[:20] # We see that the ```open/close``` approach was used - this is because ``ml.array_free_format`` is ``True``. 
Notice that ```vka``` is written internally @@ -137,7 +137,7 @@ vka = np.zeros_like(hk) fnames = [] for i, h in enumerate(hk): - fname = os.path.join(array_dir, "hk_{0}.ref".format(i + 1)) + fname = os.path.join(array_dir, f"hk_{i + 1}.ref") fnames.append(fname) np.savetxt(fname, h) vka[i] = i + 1 @@ -189,7 +189,7 @@ vka = np.zeros_like(hk) fnames = [] for i, h in enumerate(hk): - fname = os.path.join(array_dir, "hk_{0}.ref".format(i + 1)) + fname = os.path.join(array_dir, f"hk_{i + 1}.ref") fnames.append(fname) np.savetxt(fname, h) vka[i] = i + 1 @@ -206,16 +206,16 @@ # We see that now the external arrays are being handled through the name file. Let's look at the name file -open(os.path.join(ml.model_ws, ml.name + ".nam"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.nam")).readlines() # ### Free and binary format ml.dis.botm[0].format.binary = True ml.write_input() -open(os.path.join(ml.model_ws, ml.name + ".nam"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.nam")).readlines() -open(os.path.join(ml.model_ws, ml.name + ".dis"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.dis")).readlines() # ### The ```.how``` attribute # ```Util2d``` includes a ```.how``` attribute that gives finer grained control of how arrays will written @@ -240,11 +240,11 @@ ml.write_input() -open(os.path.join(ml.model_ws, ml.name + ".dis"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.dis")).readlines() -open(os.path.join(ml.model_ws, ml.name + ".lpf"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.lpf")).readlines() -open(os.path.join(ml.model_ws, ml.name + ".nam"), "r").readlines() +open(os.path.join(ml.model_ws, f"{ml.name}.nam")).readlines() try: # ignore PermissionError on Windows diff --git a/.docs/Notebooks/feat_working_stack_examples.py b/.docs/Notebooks/feat_working_stack_examples.py index baafb26abc..2bf6fd2104 100644 --- a/.docs/Notebooks/feat_working_stack_examples.py +++ b/.docs/Notebooks/feat_working_stack_examples.py @@ -40,10 +40,10 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ### Model Inputs diff --git a/.docs/Notebooks/get_transmissivities_example.py b/.docs/Notebooks/get_transmissivities_example.py index eea454bc85..19203e3bf6 100644 --- a/.docs/Notebooks/get_transmissivities_example.py +++ b/.docs/Notebooks/get_transmissivities_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ### Make up some open interval tops and bottoms and some heads @@ -104,7 +104,7 @@ fig, ax = plt.subplots() plt.plot(m.dis.top.array[r, c], label="model top") for i, l in enumerate(m.dis.botm.array[:, r, c]): - label = "layer {} bot".format(i + 1) + label = f"layer {i + 1} bot" if i == m.nlay - 1: label = "model bot" plt.plot(l, label=label) diff --git a/.docs/Notebooks/grid_intersection_example.py b/.docs/Notebooks/grid_intersection_example.py 
b/.docs/Notebooks/grid_intersection_example.py
index e553a9f5fc..48176f47c1 100644
--- a/.docs/Notebooks/grid_intersection_example.py
+++ b/.docs/Notebooks/grid_intersection_example.py
@@ -60,10 +60,10 @@
 from flopy.utils import GridIntersect
 
 print(sys.version)
-print("numpy version: {}".format(np.__version__))
-print("matplotlib version: {}".format(mpl.__version__))
-print("flopy version: {}".format(flopy.__version__))
-print("shapely version: {}".format(shapely.__version__))
+print(f"numpy version: {np.__version__}")
+print(f"matplotlib version: {mpl.__version__}")
+print(f"flopy version: {flopy.__version__}")
+print(f"shapely version: {shapely.__version__}")
 # -
 
 # ## [GridIntersect Class](#top)
diff --git a/.docs/Notebooks/gridgen_example.py b/.docs/Notebooks/gridgen_example.py
index 55d896f1f8..c02616408d 100644
--- a/.docs/Notebooks/gridgen_example.py
+++ b/.docs/Notebooks/gridgen_example.py
@@ -45,9 +45,9 @@
 from flopy.utils.gridgen import Gridgen
 
 print(sys.version)
-print("numpy version: {}".format(np.__version__))
-print("matplotlib version: {}".format(mpl.__version__))
-print("flopy version: {}".format(flopy.__version__))
+print(f"numpy version: {np.__version__}")
+print(f"matplotlib version: {mpl.__version__}")
+print(f"flopy version: {flopy.__version__}")
 # -
 
 # The Flopy GRIDGEN module requires that the gridgen executable can be called using subprocess **(i.e., gridgen is in your path)**.
@@ -62,9 +62,7 @@
     print(msg)
 else:
     print(
-        "gridgen executable was found at: {}".format(
-            flopy_io.relpath_safe(gridgen_exe)
-        )
+        f"gridgen executable was found at: {flopy_io.relpath_safe(gridgen_exe)}"
     )
 
 # +
@@ -75,8 +73,8 @@
 gridgen_ws = os.path.join(model_ws, "gridgen")
 if not os.path.exists(gridgen_ws):
     os.makedirs(gridgen_ws, exist_ok=True)
-print("Model workspace is : {}".format(flopy_io.scrub_login(model_ws)))
-print("Gridgen workspace is : {}".format(flopy_io.scrub_login(gridgen_ws)))
+print(f"Model workspace is : {flopy_io.scrub_login(model_ws)}")
+print(f"Gridgen workspace is : {flopy_io.scrub_login(gridgen_ws)}")
 # -
 
 # ## Basic Gridgen Operations
@@ -276,8 +274,8 @@
     gwf, xt3doptions=True, save_specific_discharge=True
 )
 chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data=chdspd)
-budget_file = name + ".bud"
-head_file = name + ".hds"
+budget_file = f"{name}.bud"
+head_file = f"{name}.hds"
 oc = flopy.mf6.ModflowGwfoc(
     gwf,
     budget_filerecord=budget_file,
@@ -378,8 +376,8 @@
     gwf, xt3doptions=True, save_specific_discharge=True
 )
 chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data=chdspd)
-budget_file = name + ".bud"
-head_file = name + ".hds"
+budget_file = f"{name}.bud"
+head_file = f"{name}.hds"
 oc = flopy.mf6.ModflowGwfoc(
     gwf,
     budget_filerecord=budget_file,
@@ -406,7 +404,7 @@
     pmv.contour_array(
         head, levels=[0.2, 0.4, 0.6, 0.8], linewidths=3.0, vmin=vmin, vmax=vmax
     )
-    ax.set_title("Layer {}".format(ilay + 1))
+    ax.set_title(f"Layer {ilay + 1}")
     pmv.plot_vector(spdis["qx"], spdis["qy"], color="white")
 # -
 
@@ -452,7 +450,7 @@
     raise ValueError("Failed to run.")
 
 # head is returned as a list of head arrays for each layer
-head_file = os.path.join(ws, name + ".hds")
+head_file = os.path.join(ws, f"{name}.hds")
 head = flopy.utils.HeadUFile(head_file).get_data()
 
 # MODFLOW-USG does not have vertices, so we need to create
@@ -472,7 +470,7 @@
     pmv.plot_array(head[ilay], cmap="jet", vmin=vmin, vmax=vmax)
     pmv.plot_grid(colors="k", alpha=0.1)
     pmv.contour_array(head[ilay], levels=[0.2, 0.4, 0.6, 0.8], linewidths=3.0)
-    ax.set_title("Layer {}".format(ilay + 1))
+    ax.set_title(f"Layer {ilay + 1}")
 # -
 
 try:
diff --git 
a/.docs/Notebooks/groundwater2023_watershed_example.py b/.docs/Notebooks/groundwater2023_watershed_example.py index 947b08560e..686520926a 100644 --- a/.docs/Notebooks/groundwater2023_watershed_example.py +++ b/.docs/Notebooks/groundwater2023_watershed_example.py @@ -39,9 +39,9 @@ from flopy.utils.voronoi import VoronoiGrid print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # import all plot style information from defaults.py sys.path.append("../common") diff --git a/.docs/Notebooks/groundwater_paper_example_1.py b/.docs/Notebooks/groundwater_paper_example_1.py index 08619db18f..97edf357b5 100644 --- a/.docs/Notebooks/groundwater_paper_example_1.py +++ b/.docs/Notebooks/groundwater_paper_example_1.py @@ -40,9 +40,9 @@ import flopy.utils as fpu print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Create a MODFLOW model object. Here, the MODFLOW model object is stored in a Python variable called {\tt model}, but this can be an arbitrary name. This object name is important as it will be used as a reference to the model in the remainder of the FloPy script. In addition, a {\tt modelname} is specified when the MODFLOW model object is created. This {\tt modelname} is used for all the files that are created by FloPy for this model. diff --git a/.docs/Notebooks/groundwater_paper_uspb_example.py b/.docs/Notebooks/groundwater_paper_uspb_example.py index 813a8b5a02..423d3bcb7e 100644 --- a/.docs/Notebooks/groundwater_paper_uspb_example.py +++ b/.docs/Notebooks/groundwater_paper_uspb_example.py @@ -36,9 +36,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - ws = os.path.join("temp") diff --git a/.docs/Notebooks/lake_example.py b/.docs/Notebooks/lake_example.py index 480330051e..fc2ebbf2f3 100644 --- a/.docs/Notebooks/lake_example.py +++ b/.docs/Notebooks/lake_example.py @@ -40,9 +40,9 @@ workspace = temp_dir.name print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # We are creating a square model with a specified head equal to `h1` along all boundaries. The head at the cell in the center in the top layer is fixed to `h2`. First, set the name of the model and the parameters of the model: the number of layers `Nlay`, the number of rows and columns `N`, lengths of the sides of the model `L`, aquifer thickness `H`, hydraulic conductivity `k` @@ -109,7 +109,7 @@ # Once the model has terminated normally, we can read the heads file. First, a link to the heads file is created with `HeadFile`. 
The link can then be accessed with the `get_data` function, by specifying, in this case, the step number and period number for which we want to retrieve data. A three-dimensional array is returned of size `nlay, nrow, ncol`. Matplotlib contouring functions are used to make contours of the layers or a cross-section. -hds = flopy.utils.HeadFile(os.path.join(workspace, name + ".hds")) +hds = flopy.utils.HeadFile(os.path.join(workspace, f"{name}.hds")) h = hds.get_data(kstpkper=(0, 0)) x = y = np.linspace(0, L, N) c = plt.contour(x, y, h[0], np.arange(90, 100.1, 0.2)) diff --git a/.docs/Notebooks/lgr_tutorial01.py b/.docs/Notebooks/lgr_tutorial01.py index 8ca4673c57..687477ae29 100644 --- a/.docs/Notebooks/lgr_tutorial01.py +++ b/.docs/Notebooks/lgr_tutorial01.py @@ -42,9 +42,9 @@ os.makedirs(workspace, exist_ok=True) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ## LGR basics @@ -256,8 +256,8 @@ chd = flopy.mf6.ModflowGwfchd(gwfp, stress_period_data=chdspd) oc = flopy.mf6.ModflowGwfoc( gwfp, - budget_filerecord=pname + ".bud", - head_filerecord=pname + ".hds", + budget_filerecord=f"{pname}.bud", + head_filerecord=f"{pname}.hds", saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], ) @@ -270,8 +270,8 @@ npf = flopy.mf6.ModflowGwfnpf(gwfc, save_specific_discharge=True) oc = flopy.mf6.ModflowGwfoc( gwfc, - budget_filerecord=cname + ".bud", - head_filerecord=cname + ".hds", + budget_filerecord=f"{cname}.bud", + head_filerecord=f"{cname}.hds", saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], ) @@ -436,8 +436,8 @@ chd = flopy.mf6.ModflowGwfchd(gwfp, stress_period_data=chdspd) oc = flopy.mf6.ModflowGwfoc( gwfp, - budget_filerecord=pname + ".bud", - head_filerecord=pname + ".hds", + budget_filerecord=f"{pname}.bud", + head_filerecord=f"{pname}.hds", saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], ) @@ -450,8 +450,8 @@ npf = flopy.mf6.ModflowGwfnpf(gwfc, save_specific_discharge=True) oc = flopy.mf6.ModflowGwfoc( gwfc, - budget_filerecord=cname + ".bud", - head_filerecord=cname + ".hds", + budget_filerecord=f"{cname}.bud", + head_filerecord=f"{cname}.hds", saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], ) # - @@ -495,8 +495,8 @@ ssm = flopy.mf6.ModflowGwtssm(gwtp) oc = flopy.mf6.ModflowGwtoc( gwtp, - budget_filerecord=pname + ".bud", - concentration_filerecord=pname + ".ucn", + budget_filerecord=f"{pname}.bud", + concentration_filerecord=f"{pname}.ucn", saverecord=[("CONCENTRATION", "ALL"), ("BUDGET", "ALL")], ) @@ -514,8 +514,8 @@ cnc = flopy.mf6.ModflowGwtcnc(gwtc, stress_period_data=cncspd) oc = flopy.mf6.ModflowGwtoc( gwtc, - budget_filerecord=cname + ".bud", - concentration_filerecord=cname + ".ucn", + budget_filerecord=f"{cname}.bud", + concentration_filerecord=f"{cname}.ucn", saverecord=[("CONCENTRATION", "ALL"), ("BUDGET", "ALL")], printrecord=[("CONCENTRATION", "LAST"), ("BUDGET", "ALL")], ) diff --git a/.docs/Notebooks/load_swr_binary_data_example.py b/.docs/Notebooks/load_swr_binary_data_example.py index 0a286e0564..499e612518 100644 --- a/.docs/Notebooks/load_swr_binary_data_example.py +++ b/.docs/Notebooks/load_swr_binary_data_example.py @@ -37,9 +37,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: 
{}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + # Set the paths @@ -202,7 +202,7 @@ edgecolor="none", step="pre", ) - ax.set_title("{} hours".format(times[v])) + ax.set_title(f"{times[v]} hours") ax.set_ylim(-0.5, 4.5) # ## Summary diff --git a/.docs/Notebooks/mf6_complex_model_example.py b/.docs/Notebooks/mf6_complex_model_example.py index 51f21f2d50..5b6da56ea5 100644 --- a/.docs/Notebooks/mf6_complex_model_example.py +++ b/.docs/Notebooks/mf6_complex_model_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # For this example, we will set up a temporary workspace. @@ -75,7 +75,7 @@ # create gwf model gwf = flopy.mf6.ModflowGwf( - sim, modelname=model_name, model_nam_file="{}.nam".format(model_name) + sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) gwf.name_file.save_flows = True @@ -114,12 +114,12 @@ delc=500.0, top=50.0, botm=[5.0, -10.0, botlay2], - filename="{}.dis".format(model_name), + filename=f"{model_name}.dis", ) # initial conditions ic = flopy.mf6.ModflowGwfic( - gwf, pname="ic", strt=50.0, filename="{}.ic".format(model_name) + gwf, pname="ic", strt=50.0, filename=f"{model_name}.ic" ) # node property flow @@ -136,8 +136,8 @@ oc = flopy.mf6.ModflowGwfoc( gwf, pname="oc", - budget_filerecord="{}.cbb".format(model_name), - head_filerecord="{}.hds".format(model_name), + budget_filerecord=f"{model_name}.cbb", + head_filerecord=f"{model_name}.hds", headprintrecord=[("COLUMNS", 10, "WIDTH", 15, "DIGITS", 6, "GENERAL")], saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], printrecord=[("HEAD", "FIRST"), ("HEAD", "LAST"), ("BUDGET", "LAST")], @@ -274,7 +274,7 @@ stress_period_data=ghb_period, ) ts_recarray = [] -fd = open(os.path.join(data_pth, "tides.txt"), "r") +fd = open(os.path.join(data_pth, "tides.txt")) for line in fd: line_list = line.strip().split(",") ts_recarray.append((float(line_list[0]), float(line_list[1]))) @@ -295,7 +295,7 @@ ], } ghb.obs.initialize( - filename="{}.ghb.obs".format(model_name), + filename=f"{model_name}.ghb.obs", print_input=True, continuous=obs_recarray, ) @@ -316,7 +316,7 @@ obs_package = flopy.mf6.ModflowUtlobs( gwf, pname="head_obs", - filename="{}.obs".format(model_name), + filename=f"{model_name}.obs", print_input=True, continuous=obs_recarray, ) @@ -351,7 +351,7 @@ pname="riv", print_input=True, print_flows=True, - save_flows="{}.cbc".format(model_name), + save_flows=f"{model_name}.cbc", boundnames=True, maxbound=20, stress_period_data=riv_period, @@ -407,7 +407,7 @@ ], } riv.obs.initialize( - filename="{}.riv.obs".format(model_name), + filename=f"{model_name}.riv.obs", print_input=True, continuous=obs_recarray, ) @@ -443,7 +443,7 @@ rch1_period[0] = rch1_period_array rch1 = flopy.mf6.ModflowGwfrch( gwf, - filename="{}_1.rch".format(model_name), + filename=f"{model_name}_1.rch", pname="rch_1", fixed_cell=True, auxiliary="MULTIPLIER", @@ -496,7 +496,7 @@ rch2_period[0] = rch2_period_array rch2 = flopy.mf6.ModflowGwfrch( gwf, - filename="{}_2.rch".format(model_name), + filename=f"{model_name}_2.rch", pname="rch_2", fixed_cell=True, auxiliary="MULTIPLIER", @@ -544,7 +544,7 @@ rch3_period[0] = 
rch3_period_array rch3 = flopy.mf6.ModflowGwfrch( gwf, - filename="{}_3.rch".format(model_name), + filename=f"{model_name}_3.rch", pname="rch_3", fixed_cell=True, auxiliary="MULTIPLIER", @@ -616,7 +616,7 @@ # # MODFLOW 6 writes a binary grid file, which contains information about the model grid. MODFLOW 6 also writes a binary budget file, which contains flow information. Both of these files can be read using FloPy methods. The `MfGrdFile` class in FloPy can be used to read the binary grid file, which contains the cell connectivity (`ia` and `ja`). The `output.budget()` method in FloPy can be used to read the binary budget file written by MODFLOW 6. -fname = os.path.join(workspace, "{}.dis.grb".format(model_name)) +fname = os.path.join(workspace, f"{model_name}.dis.grb") bgf = flopy.mf6.utils.MfGrdFile(fname) ia, ja = bgf.ia, bgf.ja @@ -632,14 +632,10 @@ cell_nodes = gwf.modelgrid.get_node([(k, i, j)]) for celln in cell_nodes: - print("Printing flows for cell {}".format(celln)) + print(f"Printing flows for cell {celln}") for ipos in range(ia[celln] + 1, ia[celln + 1]): cellm = ja[ipos] - print( - "Cell {} flow with cell {} is {}".format( - celln, cellm, flowja[ipos] - ) - ) + print(f"Cell {celln} flow with cell {cellm} is {flowja[ipos]}") # - # ### Post-Process Head Observations diff --git a/.docs/Notebooks/mf6_data_tutorial10.py b/.docs/Notebooks/mf6_data_tutorial10.py index 92f11d672c..049332286f 100644 --- a/.docs/Notebooks/mf6_data_tutorial10.py +++ b/.docs/Notebooks/mf6_data_tutorial10.py @@ -46,8 +46,8 @@ model_name = "child_pkgs" print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # + # create simulation @@ -60,7 +60,7 @@ sim, time_units="DAYS", nper=4, perioddata=tdis_rc ) model = flopy.mf6.ModflowGwf( - sim, modelname=model_name, model_nam_file="{}.nam".format(model_name) + sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) ims_package = flopy.mf6.modflow.mfims.ModflowIms( sim, @@ -88,10 +88,10 @@ delc=500.0, top=50.0, botm=[5.0, -10.0, {"factor": 1.0, "data": bot_data}], - filename="{}.dis".format(model_name), + filename=f"{model_name}.dis", ) ic_package = flopy.mf6.modflow.mfgwfic.ModflowGwfic( - model, strt=50.0, filename="{}.ic".format(model_name) + model, strt=50.0, filename=f"{model_name}.ic" ) npf_package = flopy.mf6.modflow.mfgwfnpf.ModflowGwfnpf( model, diff --git a/.docs/Notebooks/mf6_mnw2_tutorial01.py b/.docs/Notebooks/mf6_mnw2_tutorial01.py index e7aa2dbf18..f07e4ec381 100644 --- a/.docs/Notebooks/mf6_mnw2_tutorial01.py +++ b/.docs/Notebooks/mf6_mnw2_tutorial01.py @@ -38,12 +38,12 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) +print(f"numpy version: {np.__version__}") try: - print("pandas version: {}".format(pd.__version__)) + print(f"pandas version: {pd.__version__}") except: pass -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # ### Make an MNW2 package from scratch @@ -221,7 +221,9 @@ path = os.path.join("..", "..", "examples", "data", "mnw2_examples") m = flopy.modflow.Modflow("br", model_ws=model_ws) -mnw2 = flopy.modflow.ModflowMnw2.load(path + "/BadRiver_cal.mnw2", m) +mnw2 = flopy.modflow.ModflowMnw2.load( + os.path.join(path, "BadRiver_cal.mnw2"), m +) df = pd.DataFrame(mnw2.node_data) df.loc[:, df.sum(axis=0) != 0] diff --git a/.docs/Notebooks/mf6_sfr_tutorial01.py 
b/.docs/Notebooks/mf6_sfr_tutorial01.py index e609388118..41ce0d2500 100644 --- a/.docs/Notebooks/mf6_sfr_tutorial01.py +++ b/.docs/Notebooks/mf6_sfr_tutorial01.py @@ -30,7 +30,7 @@ import flopy print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # Create a model instance diff --git a/.docs/Notebooks/mf6_simple_model_example.py b/.docs/Notebooks/mf6_simple_model_example.py index 730a4a5479..8c00e30caa 100644 --- a/.docs/Notebooks/mf6_simple_model_example.py +++ b/.docs/Notebooks/mf6_simple_model_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # For this example, we will set up a temporary workspace. @@ -83,7 +83,7 @@ ) # Create the Flopy groundwater flow (gwf) model object -model_nam_file = "{}.nam".format(name) +model_nam_file = f"{name}.nam" gwf = flopy.mf6.ModflowGwf(sim, modelname=name, model_nam_file=model_nam_file) # Create the Flopy iterative model solver (ims) Package object @@ -159,13 +159,13 @@ ilay = 0 plt.imshow(ibd[ilay, :, :], interpolation="none") -plt.title("Layer {}: Constant Head Cells".format(ilay + 1)) +plt.title(f"Layer {ilay + 1}: Constant Head Cells") # - # Create the output control package -headfile = "{}.hds".format(name) +headfile = f"{name}.hds" head_filerecord = [headfile] -budgetfile = "{}.cbb".format(name) +budgetfile = f"{name}.cbb" budget_filerecord = [budgetfile] saverecord = [("HEAD", "ALL"), ("BUDGET", "ALL")] printrecord = [("HEAD", "LAST")] @@ -271,7 +271,7 @@ # + # read the binary grid file -fname = os.path.join(workspace, "{}.dis.grb".format(name)) +fname = os.path.join(workspace, f"{name}.dis.grb") bgf = flopy.mf6.utils.MfGrdFile(fname) # data read from the binary grid file is stored in a dictionary @@ -279,7 +279,7 @@ # + # read the cell budget file -fname = os.path.join(workspace, "{}.cbb".format(name)) +fname = os.path.join(workspace, f"{name}.cbb") cbb = flopy.utils.CellBudgetFile(fname, precision="double") cbb.list_records() @@ -294,10 +294,10 @@ j = 50 celln = k * N * N + i * N + j ia, ja = bgf.ia, bgf.ja -print("Printing flows for cell {}".format(celln)) +print(f"Printing flows for cell {celln}") for ipos in range(ia[celln] + 1, ia[celln + 1]): cellm = ja[ipos] - print("Cell {} flow with cell {} is {}".format(celln, cellm, flowja[ipos])) + print(f"Cell {celln} flow with cell {cellm} is {flowja[ipos]}") try: # ignore PermissionError on Windows diff --git a/.docs/Notebooks/mf6_support_example.py b/.docs/Notebooks/mf6_support_example.py index 3466906853..957b0a743a 100644 --- a/.docs/Notebooks/mf6_support_example.py +++ b/.docs/Notebooks/mf6_support_example.py @@ -87,7 +87,7 @@ model_name = "example_model" model = flopy.mf6.ModflowGwf( - sim, modelname=model_name, model_nam_file="{}.nam".format(model_name) + sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) # Next create one or more Iterative Model Solution (IMS) files. 
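A minimal sketch of the IMS call that comment refers to, assuming only a bare
simulation object (the simulation name "example_sim" and workspace "." below
are placeholders, not part of this patch):

    import flopy

    # placeholder simulation; the notebook builds its own with more options
    sim = flopy.mf6.MFSimulation(sim_name="example_sim", sim_ws=".")

    # the IMS package carries the solver settings; the "complexity" preset
    # ("SIMPLE", "MODERATE", or "COMPLEX") selects default tolerances
    ims = flopy.mf6.modflow.mfims.ModflowIms(
        sim, pname="ims", complexity="SIMPLE"
    )

If a simulation uses more than one solution, models can be mapped to a given
IMS object with MFSimulation.register_ims_package.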
@@ -129,7 +129,7 @@ delc=500.0, top=100.0, botm=[50.0, 20.0], - filename="{}.dis".format(model_name), + filename=f"{model_name}.dis", ) # ## Accessing namefiles @@ -260,7 +260,7 @@ "binary": "True", } ic_package = flopy.mf6.ModflowGwfic( - model, pname="ic", strt=strt, filename="{}.ic".format(model_name) + model, pname="ic", strt=strt, filename=f"{model_name}.ic" ) # move external file data into model folder icv_data_path = os.path.join( @@ -380,8 +380,8 @@ oc_package = flopy.mf6.ModflowGwfoc( model, pname="oc", - budget_filerecord=[("{}.cbc".format(model_name),)], - head_filerecord=[("{}.hds".format(model_name),)], + budget_filerecord=[(f"{model_name}.cbc",)], + head_filerecord=[(f"{model_name}.hds",)], saverecord=saverec_dict, printrecord=printrec_tuple_list, ) @@ -547,17 +547,17 @@ # + # get the data without applying any factor hk_data_no_factor = hk.get_data() -print("Data without factor:\n{}\n".format(hk_data_no_factor)) +print(f"Data without factor:\n{hk_data_no_factor}\n") # get data with factor applied hk_data_factor = hk.array -print("Data with factor:\n{}\n".format(hk_data_factor)) +print(f"Data with factor:\n{hk_data_factor}\n") # - # Data can also be retrieved from the data object using []. For unlayered data the [] can be used to slice the data. # slice layer one row two -print("SY slice of layer on row two\n{}\n".format(sy[0, :, 2])) +print(f"SY slice of layer on row two\n{sy[0, :, 2]}\n") # For layered data specify the layer number within the brackets. This will return a "LayerStorage" object which let's you change attributes of an individual layer. @@ -566,8 +566,8 @@ # change the print code and factor for layer one hk_layer_one.iprn = "2" hk_layer_one.factor = 1.1 -print("Layer one data without factor:\n{}\n".format(hk_layer_one.get_data())) -print("Data with new factor:\n{}\n".format(hk.array)) +print(f"Layer one data without factor:\n{hk_layer_one.get_data()}\n") +print(f"Data with new factor:\n{hk.array}\n") # ## Modifying Data # @@ -577,13 +577,13 @@ hk_layer_one.set_data( [120.0, 100.0, 80.0, 70.0, 60.0, 50.0, 40.0, 30.0, 25.0, 20.0] ) -print("New HK data no factor:\n{}\n".format(hk.get_data())) +print(f"New HK data no factor:\n{hk.get_data()}\n") # set data attribute to new data ic_package.strt = 150.0 -print("New strt values:\n{}\n".format(ic_package.strt.array)) +print(f"New strt values:\n{ic_package.strt.array}\n") # call set_data sto_package.ss.set_data([0.000003, 0.000004]) -print("New ss values:\n{}\n".format(sto_package.ss.array)) +print(f"New ss values:\n{sto_package.ss.array}\n") # ## Modifying the Simulation Path # diff --git a/.docs/Notebooks/mf_boundaries_tutorial.py b/.docs/Notebooks/mf_boundaries_tutorial.py index 121a1b34a5..44dbb940c2 100644 --- a/.docs/Notebooks/mf_boundaries_tutorial.py +++ b/.docs/Notebooks/mf_boundaries_tutorial.py @@ -52,8 +52,8 @@ workspace = os.path.join(temp_dir.name) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ## List of Boundaries diff --git a/.docs/Notebooks/mf_error_tutorial01.py b/.docs/Notebooks/mf_error_tutorial01.py index 0f140f5000..63365aac52 100644 --- a/.docs/Notebooks/mf_error_tutorial01.py +++ b/.docs/Notebooks/mf_error_tutorial01.py @@ -30,7 +30,7 @@ import flopy print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # #### Set the working directory diff --git 
a/.docs/Notebooks/mf_load_tutorial.py b/.docs/Notebooks/mf_load_tutorial.py index 4395bac9c5..6f787a599e 100644 --- a/.docs/Notebooks/mf_load_tutorial.py +++ b/.docs/Notebooks/mf_load_tutorial.py @@ -32,7 +32,7 @@ import flopy print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # ## The `load()` method diff --git a/.docs/Notebooks/mf_watertable_recharge_example.py b/.docs/Notebooks/mf_watertable_recharge_example.py index 2438718a63..814e2f6faa 100644 --- a/.docs/Notebooks/mf_watertable_recharge_example.py +++ b/.docs/Notebooks/mf_watertable_recharge_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Set name of MODFLOW exe diff --git a/.docs/Notebooks/mfusg_conduit_examples.py b/.docs/Notebooks/mfusg_conduit_examples.py index 5bb2b6c3cb..8d5b5d2a50 100644 --- a/.docs/Notebooks/mfusg_conduit_examples.py +++ b/.docs/Notebooks/mfusg_conduit_examples.py @@ -274,7 +274,7 @@ raise ValueError("Failed to run.") # + -head_file = os.path.join(mf.model_ws, modelname + ".clnhd") +head_file = os.path.join(mf.model_ws, f"{modelname}.clnhd") headobj = flopy.utils.HeadFile(head_file) simtimes = headobj.get_times() @@ -298,7 +298,7 @@ ax.legend() # + -cbb_file = os.path.join(mf.model_ws, modelname + ".clncb") +cbb_file = os.path.join(mf.model_ws, f"{modelname}.clncb") cbb = flopy.utils.CellBudgetFile(cbb_file) # cbb.list_records() @@ -371,7 +371,7 @@ raise ValueError("Failed to run.") # + -head_file = os.path.join(mf.model_ws, modelname + ".clnhd") +head_file = os.path.join(mf.model_ws, f"{modelname}.clnhd") headobj = flopy.utils.HeadFile(head_file) simtimes = headobj.get_times() @@ -393,7 +393,7 @@ ax.set_title("MODFLOW USG Ex3b Conduit Unconfined") # + -cbb_file = os.path.join(mf.model_ws, modelname + ".clncb") +cbb_file = os.path.join(mf.model_ws, f"{modelname}.clncb") cbb = flopy.utils.CellBudgetFile(cbb_file) # cbb.list_records() @@ -467,7 +467,7 @@ raise ValueError("Failed to run.") # + -head_file = os.path.join(mf.model_ws, modelname + ".clnhd") +head_file = os.path.join(mf.model_ws, f"{modelname}.clnhd") headobj = flopy.utils.HeadFile(head_file) simtimes = headobj.get_times() @@ -491,7 +491,7 @@ ax.legend() # + -cbb_file = os.path.join(mf.model_ws, modelname + ".clncb") +cbb_file = os.path.join(mf.model_ws, f"{modelname}.clncb") cbb = flopy.utils.CellBudgetFile(cbb_file) # cbb.list_records() @@ -562,7 +562,7 @@ raise ValueError("Failed to run.") # + -head_file = os.path.join(mf.model_ws, modelname + ".clnhd") +head_file = os.path.join(mf.model_ws, f"{modelname}.clnhd") headobj = flopy.utils.HeadFile(head_file) simtimes = headobj.get_times() @@ -576,7 +576,7 @@ head_case4 = np.squeeze(simhead) # + -cbb_file = os.path.join(mf.model_ws, modelname + ".clncb") +cbb_file = os.path.join(mf.model_ws, f"{modelname}.clncb") cbb = flopy.utils.CellBudgetFile(cbb_file) # cbb.list_records() diff --git a/.docs/Notebooks/mfusg_zaidel_example.py b/.docs/Notebooks/mfusg_zaidel_example.py index b20c710250..61d961412b 100644 --- a/.docs/Notebooks/mfusg_zaidel_example.py +++ b/.docs/Notebooks/mfusg_zaidel_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib 
version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Set temporary workspace and MODFLOW executable diff --git a/.docs/Notebooks/modelgrid_examples.py b/.docs/Notebooks/modelgrid_examples.py index da0fc86580..ffb6541599 100644 --- a/.docs/Notebooks/modelgrid_examples.py +++ b/.docs/Notebooks/modelgrid_examples.py @@ -51,9 +51,9 @@ from flopy.discretization import StructuredGrid, UnstructuredGrid, VertexGrid print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # set the names of our modflow executables @@ -147,9 +147,7 @@ proj4 = modelgrid.proj4 print( - "xoff: {}\nyoff: {}\nangrot: {}\nepsg: {}\nproj4: {}".format( - xoff, yoff, angrot, epsg, proj4 - ) + f"xoff: {xoff}\nyoff: {yoff}\nangrot: {angrot}\nepsg: {epsg}\nproj4: {proj4}" ) # - @@ -159,14 +157,12 @@ # + # show the coordinate info before setting it -print("Before: {}\n".format(modelgrid1)) +print(f"Before: {modelgrid1}\n") # set reference infromation -modelgrid1.set_coord_info( - xoff=xoff, yoff=yoff, angrot=angrot, crs=epsg -) +modelgrid1.set_coord_info(xoff=xoff, yoff=yoff, angrot=angrot, crs=epsg) -print("After: {}".format(modelgrid1)) +print(f"After: {modelgrid1}") # - # *__The user can also set individual parts of the coordinate information if they do not want to supply all of the fields__* @@ -196,7 +192,7 @@ modelgrid1.set_coord_info(angrot=rotation[ix]) pmv = flopy.plot.PlotMapView(modelgrid=modelgrid1, ax=ax) pmv.plot_grid() - ax.set_title("Modelgrid: {} degrees rotation".format(rotation[ix])) + ax.set_title(f"Modelgrid: {rotation[ix]} degrees rotation") # - # The grid lines can also be plotted directly from the modelgrid object @@ -443,7 +439,7 @@ def load_verts(fname): def load_iverts(fname): - f = open(fname, "r") + f = open(fname) iverts = [] xc = [] yc = [] @@ -501,8 +497,8 @@ def load_iverts(fname): ncpl = np.array(3 * [len(iverts)]) nnodes = np.sum(ncpl) -top = np.ones((nnodes)) -botm = np.ones((nnodes)) +top = np.ones(nnodes) +botm = np.ones(nnodes) # set top and botm elevations i0 = 0 @@ -604,9 +600,9 @@ def load_iverts(fname): # + # print grid info properties if modelgrid.is_valid and modelgrid.is_complete: - print("{} modelgrid is valid and complete\n".format(modelgrid.grid_type)) + print(f"{modelgrid.grid_type} modelgrid is valid and complete\n") -print("lenuni: {}, units: {}\n".format(modelgrid.lenuni, modelgrid.units)) +print(f"lenuni: {modelgrid.lenuni}, units: {modelgrid.units}\n") print( "lower left corner: ({0}, {2}), upper right corner: ({1}, {3})".format( @@ -626,15 +622,11 @@ def load_iverts(fname): # - `prjfile` : returns the path to the modelgrid projection file if it is set # Access and print some of these properties +print(f"xoffset: {modelgrid.xoffset}, yoffset: {modelgrid.yoffset}\n") print( - "xoffset: {}, yoffset: {}\n".format(modelgrid.xoffset, modelgrid.yoffset) -) -print( - "rotation (deg): {:.1f}, (rad): {:.4f}\n".format( - modelgrid.angrot, modelgrid.angrot_radians - ) + f"rotation (deg): {modelgrid.angrot:.1f}, (rad): {modelgrid.angrot_radians:.4f}\n" ) -print("proj4_str: {}".format(modelgrid.proj4)) +print(f"proj4_str: {modelgrid.proj4}") # 
#### Model discretization properties # @@ -656,9 +648,9 @@ def load_iverts(fname): # + # look at some model discretization information -print("Grid shape: {}\n".format(modelgrid.shape)) -print("number of cells per layer: {}\n".format(modelgrid.ncpl)) -print("number of cells in model: {}".format(modelgrid.nnodes)) +print(f"Grid shape: {modelgrid.shape}\n") +print(f"number of cells per layer: {modelgrid.ncpl}\n") +print(f"number of cells in model: {modelgrid.nnodes}") # + # plot the model cell vertices and cell centers @@ -701,7 +693,7 @@ def load_iverts(fname): ) pmv.plot_grid() pmv.plot_inactive() - ax.set_title("Modelgrid: {}".format(labels[ix])) + ax.set_title(f"Modelgrid: {labels[ix]}") plt.colorbar(pc) # - diff --git a/.docs/Notebooks/modflow_postprocessing_example.py b/.docs/Notebooks/modflow_postprocessing_example.py index 82c54f6c32..98259ce3aa 100644 --- a/.docs/Notebooks/modflow_postprocessing_example.py +++ b/.docs/Notebooks/modflow_postprocessing_example.py @@ -37,9 +37,9 @@ ) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + mfnam = "EXAMPLE.nam" @@ -68,18 +68,18 @@ grid = m.modelgrid for i, hdslayer in enumerate(hds): im = axes[i].imshow(hdslayer, vmin=hds.min(), vmax=hds.max()) - axes[i].set_title("Layer {}".format(i + 1)) + axes[i].set_title(f"Layer {i + 1}") ctr = axes[i].contour(hdslayer, colors="k", linewidths=0.5) # export head rasters # (GeoTiff export requires the rasterio package; for ascii grids, just change the extension to *.asc) flopy.export.utils.export_array( - grid, os.path.join(workspace, "heads{}.tif".format(i + 1)), hdslayer + grid, os.path.join(workspace, f"heads{i + 1}.tif"), hdslayer ) # export head contours to a shapefile flopy.export.utils.export_array_contours( - grid, os.path.join(workspace, "heads{}.shp".format(i + 1)), hdslayer + grid, os.path.join(workspace, f"heads{i + 1}.shp"), hdslayer ) fig.delaxes(axes[-1]) @@ -100,9 +100,7 @@ grid, os.path.join(workspace, "heads5_rot.tif"), hdslayer, nodata=nodata ) -results = np.loadtxt( - os.path.join(workspace, "heads5_rot.asc".format(i + 1)), skiprows=6 -) +results = np.loadtxt(os.path.join(workspace, f"heads5_rot.asc"), skiprows=6) results[results == nodata] = np.nan plt.imshow(results) plt.colorbar() @@ -128,9 +126,7 @@ for i, vertical_gradient in enumerate(grad): im = axes[i].imshow(vertical_gradient, vmin=grad.min(), vmax=grad.max()) - axes[i].set_title( - "Vertical gradient\nbetween Layers {} and {}".format(i + 1, i + 2) - ) + axes[i].set_title(f"Vertical gradient\nbetween Layers {i + 1} and {i + 2}") ctr = axes[i].contour( vertical_gradient, levels=[-0.1, -0.05, 0.0, 0.05, 0.1], diff --git a/.docs/Notebooks/modpath6_example.py b/.docs/Notebooks/modpath6_example.py index 286820479e..018d9a649f 100644 --- a/.docs/Notebooks/modpath6_example.py +++ b/.docs/Notebooks/modpath6_example.py @@ -43,10 +43,10 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Load 
the MODFLOW model, then switch to a temporary working directory. diff --git a/.docs/Notebooks/modpath7_create_simulation_example.py b/.docs/Notebooks/modpath7_create_simulation_example.py index 4dc05b9114..4e1d9644f0 100644 --- a/.docs/Notebooks/modpath7_create_simulation_example.py +++ b/.docs/Notebooks/modpath7_create_simulation_example.py @@ -40,9 +40,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # temporary directory temp_dir = TemporaryDirectory() @@ -98,7 +98,7 @@ def get_nodes(locs): ) # Create the Flopy groundwater flow (gwf) model object -model_nam_file = "{}.nam".format(nm) +model_nam_file = f"{nm}.nam" gwf = flopy.mf6.ModflowGwf( sim, modelname=nm, model_nam_file=model_nam_file, save_flows=True ) @@ -148,9 +148,9 @@ def get_nodes(locs): rd.append([(0, i, ncol - 1), riv_h, riv_c, riv_z]) flopy.mf6.modflow.mfgwfriv.ModflowGwfriv(gwf, stress_period_data={0: rd}) # Create the output control package -headfile = "{}.hds".format(nm) +headfile = f"{nm}.hds" head_record = [headfile] -budgetfile = "{}.cbb".format(nm) +budgetfile = f"{nm}.cbb" budget_record = [budgetfile] saverecord = [("HEAD", "ALL"), ("BUDGET", "ALL")] oc = flopy.mf6.modflow.mfgwfoc.ModflowGwfoc( @@ -182,7 +182,7 @@ def get_nodes(locs): # + # create modpath files -mpnamf = nm + "_mp_forward" +mpnamf = f"{nm}_mp_forward" # create basic forward tracking modpath simulation mp = flopy.modpath.Modpath7.create_mp7( @@ -210,7 +210,7 @@ def get_nodes(locs): # + # create modpath files -mpnamb = nm + "_mp_backward" +mpnamb = f"{nm}_mp_backward" # create basic forward tracking modpath simulation mp = flopy.modpath.Modpath7.create_mp7( @@ -241,14 +241,14 @@ def get_nodes(locs): # Load forward tracking pathline data -fpth = os.path.join(ws, mpnamf + ".mppth") +fpth = os.path.join(ws, f"{mpnamf}.mppth") p = flopy.utils.PathlineFile(fpth) pw = p.get_destination_pathline_data(dest_cells=nodew) pr = p.get_destination_pathline_data(dest_cells=nodesr) # Load forward tracking endpoint data -fpth = os.path.join(ws, mpnamf + ".mpend") +fpth = os.path.join(ws, f"{mpnamf}.mpend") e = flopy.utils.EndpointFile(fpth) # Get forward particles that terminate in the well @@ -273,7 +273,7 @@ def get_nodes(locs): for k in range(nlay): ax = axes[idax] ax.set_aspect("equal") - ax.set_title("Well pathlines - Layer {}".format(k + 1)) + ax.set_title(f"Well pathlines - Layer {k + 1}") mm = flopy.plot.PlotMapView(model=gwf, ax=ax) mm.plot_grid(lw=0.5) mm.plot_pathline(pw, layer=k, colors=colors[k], lw=0.75) @@ -282,7 +282,7 @@ def get_nodes(locs): for k in range(nlay): ax = axes[idax] ax.set_aspect("equal") - ax.set_title("River pathlines - Layer {}".format(k + 1)) + ax.set_title(f"River pathlines - Layer {k + 1}") mm = flopy.plot.PlotMapView(model=gwf, ax=ax) mm.plot_grid(lw=0.5) mm.plot_pathline(pr, layer=k, colors=colors[k], lw=0.75) @@ -316,14 +316,14 @@ def get_nodes(locs): # # Load backward tracking pathlines -fpth = os.path.join(ws, mpnamb + ".mppth") +fpth = os.path.join(ws, f"{mpnamb}.mppth") p = flopy.utils.PathlineFile(fpth) pwb = p.get_destination_pathline_data(dest_cells=nodew) prb = p.get_destination_pathline_data(dest_cells=nodesr) # Load backward tracking endpoints -fpth = os.path.join(ws, mpnamb + ".mpend") +fpth = os.path.join(ws, f"{mpnamb}.mpend") e = 
flopy.utils.EndpointFile(fpth) ewb = e.get_destination_endpoint_data(dest_cells=nodew, source=True) erb = e.get_destination_endpoint_data(dest_cells=nodesr, source=True) diff --git a/.docs/Notebooks/modpath7_structured_example.py b/.docs/Notebooks/modpath7_structured_example.py index 6bb638f0fe..ebf561190f 100644 --- a/.docs/Notebooks/modpath7_structured_example.py +++ b/.docs/Notebooks/modpath7_structured_example.py @@ -40,9 +40,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # temporary directory temp_dir = TemporaryDirectory() @@ -176,7 +176,7 @@ # create modpath files exe_name = "mp7" mp = flopy.modpath.Modpath7( - modelname=nm + "_mp", flowmodel=m, exe_name=exe_name, model_ws=ws + modelname=f"{nm}_mp", flowmodel=m, exe_name=exe_name, model_ws=ws ) mpbas = flopy.modpath.Modpath7Bas(mp, porosity=0.1, defaultiface=defaultiface) mpsim = flopy.modpath.Modpath7Sim( @@ -216,7 +216,7 @@ # Pathline data -fpth = os.path.join(ws, nm + "_mp.mppth") +fpth = os.path.join(ws, f"{nm}_mp.mppth") p = flopy.utils.PathlineFile(fpth) pw0 = p.get_destination_pathline_data(nodew, to_recarray=True) pr0 = p.get_destination_pathline_data(nodesr, to_recarray=True) @@ -225,7 +225,7 @@ # # Get particles that terminate in the well -fpth = os.path.join(ws, nm + "_mp.mpend") +fpth = os.path.join(ws, f"{nm}_mp.mpend") e = flopy.utils.EndpointFile(fpth) well_epd = e.get_destination_endpoint_data(dest_cells=nodew) well_epd.shape @@ -270,7 +270,7 @@ ) # Create the Flopy groundwater flow (gwf) model object -model_nam_file = "{}.nam".format(nm) +model_nam_file = f"{nm}.nam" gwf = flopy.mf6.ModflowGwf( sim, modelname=nm, model_nam_file=model_nam_file, save_flows=True ) @@ -320,9 +320,9 @@ rd.append([(0, i, ncol - 1), riv_h, riv_c, riv_z]) flopy.mf6.modflow.mfgwfriv.ModflowGwfriv(gwf, stress_period_data={0: rd}) # Create the output control package -headfile = "{}.hds".format(nm) +headfile = f"{nm}.hds" head_record = [headfile] -budgetfile = "{}.cbb".format(nm) +budgetfile = f"{nm}.cbb" budget_record = [budgetfile] saverecord = [("HEAD", "ALL"), ("BUDGET", "ALL")] oc = flopy.mf6.modflow.mfgwfoc.ModflowGwfoc( @@ -348,7 +348,7 @@ # create modpath files exe_name = "mp7" mp = flopy.modpath.Modpath7( - modelname=nm + "_mp", flowmodel=gwf, exe_name=exe_name, model_ws=ws + modelname=f"{nm}_mp", flowmodel=gwf, exe_name=exe_name, model_ws=ws ) mpbas = flopy.modpath.Modpath7Bas(mp, porosity=0.1, defaultiface=defaultiface6) mpsim = flopy.modpath.Modpath7Sim( @@ -382,7 +382,7 @@ # # Pathline data -fpth = os.path.join(ws, nm + "_mp.mppth") +fpth = os.path.join(ws, f"{nm}_mp.mppth") p = flopy.utils.PathlineFile(fpth) pw1 = p.get_destination_pathline_data(nodew, to_recarray=True) pr1 = p.get_destination_pathline_data(nodesr, to_recarray=True) @@ -391,7 +391,7 @@ # # Get particles that terminate in the well -fpth = os.path.join(ws, nm + "_mp.mpend") +fpth = os.path.join(ws, f"{nm}_mp.mpend") e = flopy.utils.EndpointFile(fpth) well_epd = e.get_destination_endpoint_data(dest_cells=nodew) diff --git a/.docs/Notebooks/modpath7_structured_transient_example.py b/.docs/Notebooks/modpath7_structured_transient_example.py index dac6f53c62..717b77a3e0 100644 --- a/.docs/Notebooks/modpath7_structured_transient_example.py +++ 
b/.docs/Notebooks/modpath7_structured_transient_example.py @@ -53,9 +53,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") temp_dir = TemporaryDirectory() sim_name = "mp7_ex03a_mf6" @@ -165,7 +165,7 @@ def fill_zone_1(): ) # groundwater flow (gwf) model -model_nam_file = "{}.nam".format(sim_name) +model_nam_file = f"{sim_name}.nam" gwf = flopy.mf6.ModflowGwf( sim, modelname=sim_name, model_nam_file=model_nam_file, save_flows=True ) @@ -231,9 +231,9 @@ def no_flow(w): drn = flopy.mf6.modflow.mfgwfdrn.ModflowGwfdrn(gwf, stress_period_data={0: dd}) # output control -headfile = "{}.hds".format(sim_name) +headfile = f"{sim_name}.hds" head_record = [headfile] -budgetfile = "{}.cbb".format(sim_name) +budgetfile = f"{sim_name}.cbb" budget_record = [budgetfile] saverecord = [("HEAD", "ALL"), ("BUDGET", "ALL")] oc = flopy.mf6.modflow.mfgwfoc.ModflowGwfoc( diff --git a/.docs/Notebooks/modpath7_unstructured_example.py b/.docs/Notebooks/modpath7_unstructured_example.py index ca69fa873e..8842954b23 100644 --- a/.docs/Notebooks/modpath7_unstructured_example.py +++ b/.docs/Notebooks/modpath7_unstructured_example.py @@ -44,9 +44,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # temporary directory temp_dir = TemporaryDirectory() @@ -206,7 +206,7 @@ # create gwf model gwf = flopy.mf6.ModflowGwf( - sim, modelname=model_name, model_nam_file="{}.nam".format(model_name) + sim, modelname=model_name, model_nam_file=f"{model_name}.nam" ) gwf.name_file.save_flows = True @@ -277,8 +277,8 @@ oc = flopy.mf6.ModflowGwfoc( gwf, pname="oc", - budget_filerecord="{}.cbb".format(model_name), - head_filerecord="{}.hds".format(model_name), + budget_filerecord=f"{model_name}.cbb", + head_filerecord=f"{model_name}.hds", headprintrecord=[("COLUMNS", 10, "WIDTH", 15, "DIGITS", 6, "GENERAL")], saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], printrecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], @@ -300,7 +300,7 @@ # Plot the boundary conditions on the grid. -fname = os.path.join(model_ws, model_name + ".disv.grb") +fname = os.path.join(model_ws, f"{model_name}.disv.grb") grd = flopy.mf6.utils.MfGrdFile(fname, verbose=False) mg = grd.modelgrid ibd = np.zeros((ncpl), dtype=int) @@ -321,7 +321,7 @@ pc = pmv.plot_array(ibd, cmap=cmap, edgecolor="gray") t = ax.set_title("Boundary Conditions\n") -fname = os.path.join(model_ws, model_name + ".hds") +fname = os.path.join(model_ws, f"{model_name}.hds") hdobj = flopy.utils.HeadFile(fname) head = hdobj.get_data() head.shape @@ -340,9 +340,7 @@ cs = mm.contour_array(head[:, 0, :], colors="white", levels=levels) plt.clabel(cs, fmt="%.1f", colors="white", fontsize=11) cb = plt.colorbar(pc, shrink=0.5) -t = ax.set_title( - "Model Layer {}; hmin={:6.2f}, hmax={:6.2f}".format(ilay + 1, hmin, hmax) -) +t = ax.set_title(f"Model Layer {ilay + 1}; hmin={hmin:6.2f}, hmax={hmax:6.2f}") # Inspect model cells and vertices. @@ -379,8 +377,8 @@ # # Define names for the MODPATH 7 simulations. 
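
# Both names serve as file stems: MODPATH 7 keys every input and output
# file on them, so later cells read back, e.g., the pathline file. A
# minimal sketch of that read (the stem and workspace variables are the
# ones defined in this notebook, and the file suffix follows the usual
# MODPATH 7 convention; both are assumptions here):
fpth = os.path.join(model_ws, f"{mp_namea}.mppth")
p = flopy.utils.PathlineFile(fpth)
pl = p.get_alldata()
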
-mp_namea = model_name + "a_mp" -mp_nameb = model_name + "b_mp" +mp_namea = f"{model_name}a_mp" +mp_nameb = f"{model_name}b_mp" # Create particles for the pathline and timeseries analysis. @@ -419,7 +417,7 @@ ) # create backward particle group -fpth = mp_namea + ".sloc" +fpth = f"{mp_namea}.sloc" pga = flopy.modpath.ParticleGroup( particlegroupname="BACKWARD1", particledata=pa, filename=fpth ) @@ -444,7 +442,7 @@ ) pb = flopy.modpath.NodeParticleData(subdivisiondata=facedata, nodes=nodew) # create forward particle group -fpth = mp_nameb + ".sloc" +fpth = f"{mp_nameb}.sloc" pgb = flopy.modpath.ParticleGroupNodeTemplate( particlegroupname="BACKWARD2", particledata=pb, filename=fpth ) diff --git a/.docs/Notebooks/modpath7_unstructured_lateral_example.py b/.docs/Notebooks/modpath7_unstructured_lateral_example.py index 56e3256541..b201dfd534 100644 --- a/.docs/Notebooks/modpath7_unstructured_lateral_example.py +++ b/.docs/Notebooks/modpath7_unstructured_lateral_example.py @@ -525,7 +525,7 @@ # + mp = flopy.modpath.Modpath7( - modelname=sim_name + "_mp", + modelname=f"{sim_name}_mp", flowmodel=gwf, exe_name="mp7", model_ws=workspace, diff --git a/.docs/Notebooks/mt3d-usgs_example.py b/.docs/Notebooks/mt3d-usgs_example.py index 51eb38a183..54abb150fa 100644 --- a/.docs/Notebooks/mt3d-usgs_example.py +++ b/.docs/Notebooks/mt3d-usgs_example.py @@ -52,9 +52,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + temp_dir = TemporaryDirectory() @@ -302,15 +302,15 @@ def calc_strtElev(X, Y): # remember that lay, row, col need to be zero-based and are adjusted accordingly by flopy # layer + row + col + iseg + irch + rchlen + strtop + slope + strthick + strmbed K - s1 += "0,{}".format(1) - s1 += ",{}".format(y) - s1 += ",{}".format(iseg) - s1 += ",{}".format(irch) - s1 += ",{}".format(delr) - s1 += ",{}".format(strmBed_Elev[y]) - s1 += ",{}".format(0.0001) - s1 += ",{}".format(0.50) - s1 += ",{}\n".format(strhc1) + s1 += f"0,{1}" + s1 += f",{y}" + s1 += f",{iseg}" + s1 += f",{irch}" + s1 += f",{delr}" + s1 += f",{strmBed_Elev[y]}" + s1 += f",{0.0001}" + s1 += f",{0.50}" + s1 += f",{strhc1}\n" fpth = os.path.join(modelpth, "s1.csv") @@ -331,10 +331,7 @@ def calc_strtElev(X, Y): ("strhc1", " (3, 0): - f = open(fpth, "rb") -else: - f = open(fpth, "r") +f = open(fpth, "rb") reach_data = np.genfromtxt(f, delimiter=",", names=True, dtype=dtype) f.close() @@ -352,10 +349,7 @@ def calc_strtElev(X, Y): f.write(s2) f.close() -if sys.version_info > (3, 0): - f = open(fpth, "rb") -else: - f = open(fpth, "r") +f = open(fpth, "rb") segment_data = np.genfromtxt(f, delimiter=",", names=True) f.close() @@ -557,7 +551,7 @@ def calc_strtElev(X, Y): # + # Define a function to read SFT output file def load_ts_from_SFT_output(fname, nd=1): - f = open(fname, "r") + f = open(fname) iline = 0 lst = [] for line in f: @@ -576,7 +570,7 @@ def load_ts_from_SFT_output(fname, nd=1): # Also define a function to read OTIS output file def load_ts_from_otis(fname, iobs=1): - f = open(fname, "r") + f = open(fname) iline = 0 lst = [] for line in f: diff --git a/.docs/Notebooks/mt3dms_examples.py b/.docs/Notebooks/mt3dms_examples.py index 8d1627e81f..51352e4120 100644 --- a/.docs/Notebooks/mt3dms_examples.py +++ b/.docs/Notebooks/mt3dms_examples.py @@ -64,9 
+64,9 @@ workdir = temp_dir.name print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - @@ -99,7 +99,7 @@ def p01(dirname, al, retardation, lambda1, mixelm): rhob = 0.25 kd = (retardation - 1.0) * prsity / rhob - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -132,7 +132,7 @@ def p01(dirname, al, retardation, lambda1, mixelm): else: raise ValueError("Failed to run.") - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -289,7 +289,7 @@ def p02(dirname, isothm, sp1, sp2, mixelm): laytyp = 0 rhob = 1.587 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -317,7 +317,7 @@ def p02(dirname, isothm, sp1, sp2, mixelm): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -407,7 +407,7 @@ def p02(dirname, isothm, sp1, sp2, mixelm): plt.legend() for beta in [0, 2.0e-3, 1.0e-2, 20.0]: - lbl = "beta={}".format(beta) + lbl = f"beta={beta}" mf, mt, conc, cvt, mvt = p02("nonequilibrium", 4, 0.933, beta, -1) x = cvt["time"] y = cvt["(1, 1, 51)"] / 0.05 @@ -444,7 +444,7 @@ def p03(dirname, mixelm): hk = 1.0 laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -473,7 +473,7 @@ def p03(dirname, mixelm): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -580,7 +580,7 @@ def p04(dirname, mixelm): hk = 1.0 laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -616,7 +616,7 @@ def p04(dirname, mixelm): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -738,7 +738,7 @@ def p05(dirname, mixelm, dt0, ttsmult): hk = 1.0 laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -765,7 +765,7 @@ def p05(dirname, mixelm, dt0, ttsmult): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -883,7 +883,7 @@ def p06(dirname, mixelm, dt0): hk = 0.005 * 86400 laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -912,7 +912,7 @@ def p06(dirname, mixelm, dt0): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -1031,7 +1031,7 @@ def p07(dirname, mixelm): hk = 0.5 
laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -1060,7 +1060,7 @@ def p07(dirname, mixelm): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -1131,7 +1131,7 @@ def p07(dirname, mixelm): plt.clabel(cs) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {}".format(ilay + 1)) +plt.title(f"LAYER {ilay + 1}") ax = fig.add_subplot(3, 1, 2, aspect="equal") ilay = 5 @@ -1142,7 +1142,7 @@ def p07(dirname, mixelm): plt.clabel(cs) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {}".format(ilay + 1)) +plt.title(f"LAYER {ilay + 1}") ax = fig.add_subplot(3, 1, 3, aspect="equal") ilay = 6 @@ -1153,7 +1153,7 @@ def p07(dirname, mixelm): plt.clabel(cs) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {}".format(ilay + 1)) +plt.title(f"LAYER {ilay + 1}") plt.plot(grid.xcellcenters[7, 2], grid.ycellcenters[7, 2], "ko") plt.tight_layout() @@ -1189,7 +1189,7 @@ def p08(dirname, mixelm): hk[11:19, :, 36:] = k2 laytyp = 6 * [1] + 21 * [0] - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -1220,7 +1220,7 @@ def p08(dirname, mixelm): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -1355,7 +1355,7 @@ def p09(dirname, mixelm, nadvfd): hk = k1 * np.ones((nlay, nrow, ncol), dtype=float) hk[:, 5:8, 1:8] = k2 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -1387,7 +1387,7 @@ def p09(dirname, mixelm, nadvfd): mf.write_input() mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -1558,7 +1558,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): vka = 0.1 laytyp = 0 - modelname_mf = dirname + "_mf" + modelname_mf = f"{dirname}_mf" mf = flopy.modflow.Modflow( modelname=modelname_mf, model_ws=model_ws, exe_name=exe_name_mf ) @@ -1613,7 +1613,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): os.remove(fname) mf.run_model(silent=True) - modelname_mt = dirname + "_mt" + modelname_mt = f"{dirname}_mt" mt = flopy.mt3d.Mt3dms( modelname=modelname_mt, model_ws=model_ws, @@ -1717,7 +1717,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): plt.ylim(9100, 9100 + 45 * 50) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {} INITIAL CONCENTRATION".format(3)) +plt.title(f"LAYER {3} INITIAL CONCENTRATION") for k, i, j, q in mf.wel.stress_period_data[0]: plt.plot(grid.xcellcenters[i, j], grid.ycellcenters[i, j], "ks") @@ -1732,7 +1732,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): plt.ylim(9100, 9100 + 45 * 50) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {} TIME = 500 DAYS".format(3)) +plt.title(f"LAYER {3} TIME = 
500 DAYS") for k, i, j, q in mf.wel.stress_period_data[0]: plt.plot(grid.xcellcenters[i, j], grid.ycellcenters[i, j], "ks") @@ -1747,7 +1747,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): plt.ylim(9100, 9100 + 45 * 50) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {} TIME = 750 DAYS".format(3)) +plt.title(f"LAYER {3} TIME = 750 DAYS") for k, i, j, q in mf.wel.stress_period_data[0]: plt.plot(grid.xcellcenters[i, j], grid.ycellcenters[i, j], "ks") @@ -1762,7 +1762,7 @@ def p10(dirname, mixelm, perlen=1000, isothm=1, sp2=0.0, ttsmult=1.2): plt.ylim(9100, 9100 + 45 * 50) plt.xlabel("DISTANCE ALONG X-AXIS, IN METERS") plt.ylabel("DISTANCE ALONG Y-AXIS, IN METERS") -plt.title("LAYER {} TIME = 1000 DAYS".format(3)) +plt.title(f"LAYER {3} TIME = 1000 DAYS") for k, i, j, q in mf.wel.stress_period_data[0]: plt.plot(grid.xcellcenters[i, j], grid.ycellcenters[i, j], "ks") diff --git a/.docs/Notebooks/mt3dms_sft_lkt_uzt_tutorial.py b/.docs/Notebooks/mt3dms_sft_lkt_uzt_tutorial.py index e37df9ee78..4b91941dee 100644 --- a/.docs/Notebooks/mt3dms_sft_lkt_uzt_tutorial.py +++ b/.docs/Notebooks/mt3dms_sft_lkt_uzt_tutorial.py @@ -47,9 +47,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Create a MODFLOW model and store it, in this case in the variable 'mf'. @@ -441,7 +441,7 @@ bdlknc_lyr1 = LakArr_lyr1.copy() bdlknc_lyr2 = LakArr_lyr1.copy() -bdlknc_lyr3 = np.zeros((LakArr_lyr1.shape)) +bdlknc_lyr3 = np.zeros(LakArr_lyr1.shape) # Need to expand bdlknc_lyr1 non-zero values by 1 in either direction # (left/right and up/down) diff --git a/.docs/Notebooks/mt3dms_ssm_package_tutorial.py b/.docs/Notebooks/mt3dms_ssm_package_tutorial.py index 05f3c0398a..292bb66e5d 100644 --- a/.docs/Notebooks/mt3dms_ssm_package_tutorial.py +++ b/.docs/Notebooks/mt3dms_ssm_package_tutorial.py @@ -37,8 +37,8 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # - # First, we will create a simple model structure @@ -156,8 +156,8 @@ # And finally, modify the ```vdf``` package to fix ```indense```. 
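
# The fix below is a plain read-modify-write of the VDF input file. The
# same round trip can be sketched with pathlib (a sketch only; `model_ws`
# and `modelname` are this tutorial's variables, and the INDENSE edit
# itself is elided):
from pathlib import Path

vdf_path = Path(model_ws) / f"{modelname}.vdf"
vdf_lines = vdf_path.read_text().splitlines(keepends=True)
# ... fix the INDENSE entry in vdf_lines here ...
vdf_path.write_text("".join(vdf_lines))
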
-fname = modelname + ".vdf" -f = open(os.path.join(model_ws, fname), "r") +fname = f"{modelname}.vdf" +f = open(os.path.join(model_ws, fname)) lines = f.readlines() f.close() f = open(os.path.join(model_ws, fname), "w") diff --git a/.docs/Notebooks/nwt_option_blocks_tutorial.py b/.docs/Notebooks/nwt_option_blocks_tutorial.py index 48ff4a7367..e57063a528 100644 --- a/.docs/Notebooks/nwt_option_blocks_tutorial.py +++ b/.docs/Notebooks/nwt_option_blocks_tutorial.py @@ -38,7 +38,7 @@ from flopy.utils import OptionBlock print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # + load_ws = os.path.join("..", "..", "examples", "data", "options", "sagehen") diff --git a/.docs/Notebooks/pest_tutorial01.py b/.docs/Notebooks/pest_tutorial01.py index 02775a2d67..02e36e75b0 100644 --- a/.docs/Notebooks/pest_tutorial01.py +++ b/.docs/Notebooks/pest_tutorial01.py @@ -33,8 +33,8 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # - # This notebook will work with a simple model using the dimensions below @@ -87,7 +87,7 @@ # At this point, the lpf template file will have been created. The following block will print the template file. -lines = open(os.path.join(workspace, "mymodel.lpf.tpl"), "r").readlines() +lines = open(os.path.join(workspace, "mymodel.lpf.tpl")).readlines() for l in lines: print(l.strip()) @@ -114,7 +114,7 @@ tw.write_template() # - -lines = open(os.path.join(workspace, "mymodel.lpf.tpl"), "r").readlines() +lines = open(os.path.join(workspace, "mymodel.lpf.tpl")).readlines() for l in lines: print(l.strip()) @@ -160,7 +160,7 @@ tw.write_template() # Print contents of template file -lines = open(os.path.join(workspace, "mymodel.lpf.tpl"), "r").readlines() +lines = open(os.path.join(workspace, "mymodel.lpf.tpl")).readlines() for l in lines: print(l.strip()) @@ -210,7 +210,7 @@ tw.write_template() # Print the results -lines = open(os.path.join(workspace, "mymodel.rch.tpl"), "r").readlines() +lines = open(os.path.join(workspace, "mymodel.rch.tpl")).readlines() for l in lines: print(l.strip()) @@ -260,7 +260,7 @@ tw.write_template() # Print the results -lines = open(os.path.join(workspace, "mymodel.rch.tpl"), "r").readlines() +lines = open(os.path.join(workspace, "mymodel.rch.tpl")).readlines() for l in lines: print(l.strip()) # - diff --git a/.docs/Notebooks/plot_array_example.py b/.docs/Notebooks/plot_array_example.py index a2bd4f3e6f..92a4f81c7e 100644 --- a/.docs/Notebooks/plot_array_example.py +++ b/.docs/Notebooks/plot_array_example.py @@ -41,9 +41,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + # Set name of MODFLOW exe @@ -82,10 +82,10 @@ # confirm that the model files have been created for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. 
Output file cannot be found: {f}" print(errmsg) # - diff --git a/.docs/Notebooks/plot_cross_section_example.py b/.docs/Notebooks/plot_cross_section_example.py index a17f1b12f6..910c532d26 100644 --- a/.docs/Notebooks/plot_cross_section_example.py +++ b/.docs/Notebooks/plot_cross_section_example.py @@ -45,9 +45,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + pycharm={"name": "#%%\n"} # Set names of the MODFLOW exes @@ -83,10 +83,10 @@ files = ["freyberg.hds", "freyberg.cbc"] for f in files: if os.path.isfile(os.path.join(str(modelpth), f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + [markdown] pycharm={"name": "#%% md\n"} @@ -438,10 +438,10 @@ files = ["freyberg.hds", "freyberg.cbc"] for f in files: if os.path.isfile(os.path.join(str(modelpth), f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + [markdown] pycharm={"name": "#%% md\n"} @@ -537,10 +537,10 @@ files = ["mp7p2.hds", "mp7p2.cbb"] for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + pycharm={"name": "#%%\n"} @@ -754,7 +754,7 @@ with styles.USGSPlot(): fig, axes = plt.subplots(3, 1, figsize=(15, 9), tight_layout=True) for ix, totim in enumerate(plot_times): - heading = "Time = {}".format(totim) + heading = f"Time = {totim}" conc = cobj.get_data(totim=totim) ax = axes[ix] xsect = flopy.plot.PlotCrossSection(model=gwf6, ax=ax, line={"row": 0}) diff --git a/.docs/Notebooks/plot_map_view_example.py b/.docs/Notebooks/plot_map_view_example.py index d24854c7a5..837e790a58 100644 --- a/.docs/Notebooks/plot_map_view_example.py +++ b/.docs/Notebooks/plot_map_view_example.py @@ -47,9 +47,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + pycharm={"name": "#%%\n"} # Set name of MODFLOW exe @@ -86,10 +86,10 @@ files = ["freyberg.hds", "freyberg.cbc"] for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. 
Output file cannot be found: {f}" print(errmsg) # + [markdown] pycharm={"name": "#%% md\n"} @@ -437,7 +437,7 @@ # construct maximum travel time to plot (200 years - MODFLOW time unit is seconds) travel_time_max = 200.0 * 365.25 * 24.0 * 60.0 * 60.0 -ctt = "<={}".format(travel_time_max) +ctt = f"<={travel_time_max}" # plot the pathlines mapview.plot_pathline(plines, layer="all", colors="red", travel_time=ctt) @@ -607,10 +607,10 @@ files = ["freyberg.hds", "freyberg.cbc"] for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + [markdown] pycharm={"name": "#%% md\n"} @@ -716,10 +716,10 @@ files = ["mp7p2.hds", "mp7p2.cbb"] for f in files: if os.path.isfile(os.path.join(modelpth, f)): - msg = "Output file located: {}".format(f) + msg = f"Output file located: {f}" print(msg) else: - errmsg = "Error. Output file cannot be found: {}".format(f) + errmsg = f"Error. Output file cannot be found: {f}" print(errmsg) # + pycharm={"name": "#%%\n"} @@ -820,11 +820,11 @@ # + pycharm={"name": "#%%\n"} # load the MODPATH-7 results mp_namea = "mp7p2a_mp" -fpth = os.path.join(modelpth, mp_namea + ".mppth") +fpth = os.path.join(modelpth, f"{mp_namea}.mppth") p = flopy.utils.PathlineFile(fpth) p0 = p.get_alldata() -fpth = os.path.join(modelpth, mp_namea + ".timeseries") +fpth = os.path.join(modelpth, f"{mp_namea}.timeseries") ts = flopy.utils.TimeseriesFile(fpth) ts0 = ts.get_alldata() @@ -899,7 +899,7 @@ def load_verts(fname): def load_iverts(fname): - f = open(fname, "r") + f = open(fname) iverts = [] xc = [] yc = [] @@ -926,11 +926,11 @@ def load_iverts(fname): # + pycharm={"name": "#%%\n"} # Print the first 5 entries in verts and iverts for ivert, v in enumerate(verts[:5]): - print("Vertex coordinate pair for vertex {}: {}".format(ivert, v)) + print(f"Vertex coordinate pair for vertex {ivert}: {v}") print("...\n") for icell, vertlist in enumerate(iverts[:5]): - print("List of vertices for cell {}: {}".format(icell, vertlist)) + print(f"List of vertices for cell {icell}: {vertlist}") # + [markdown] pycharm={"name": "#%% md\n"} # A flopy `UnstructuredGrid` object can now be created using the vertices and incidence list. The `UnstructuredGrid` object is a key part of the plotting capabilities in flopy. In addition to the vertex information, the `UnstructuredGrid` object also needs to know how many cells are in each layer. This is specified in the ncpl variable, which is a list of cells per layer. 
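
# A minimal sketch of that construction (the two identical `ncpl` entries
# are an assumption for illustration; the notebook builds the real list in
# a following cell):
from flopy.discretization import UnstructuredGrid

ncpl = [len(iverts), len(iverts)]  # cells per layer, one entry per layer
umg = UnstructuredGrid(
    vertices=verts, iverts=iverts, xcenters=xc, ycenters=yc, ncpl=ncpl
)
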
@@ -954,7 +954,7 @@ def load_iverts(fname): # Create a random array for layer 0, and then plot it with a color flood and contours f = plt.figure(figsize=(10, 10)) -a = np.random.random((ncpl[0])) * 100 +a = np.random.random(ncpl[0]) * 100 levels = np.arange(0, 100, 30) mapview = flopy.plot.PlotMapView(modelgrid=umg) diff --git a/.docs/Notebooks/raster_intersection_example.py b/.docs/Notebooks/raster_intersection_example.py index 599ce8bc9a..da466c67d2 100644 --- a/.docs/Notebooks/raster_intersection_example.py +++ b/.docs/Notebooks/raster_intersection_example.py @@ -47,11 +47,11 @@ from flopy.utils import Raster print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("shapely version: {}".format(shapely.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"shapely version: {shapely.__version__}") +print(f"flopy version: {flopy.__version__}") # - # temporary directory @@ -151,7 +151,7 @@ ax = pmv.plot_array( dem_data, masked_values=rio.nodatavals, vmin=vmin, vmax=vmax ) -plt.title("Resample time, nearest neighbor: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, nearest neighbor: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -170,7 +170,7 @@ ax = pmv.plot_array( dem_data, masked_values=rio.nodatavals, vmin=vmin, vmax=vmax ) -plt.title("Resample time, bi-linear: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, bi-linear: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -189,7 +189,7 @@ ax = pmv.plot_array( dem_data, masked_values=rio.nodatavals, vmin=vmin, vmax=vmax ) -plt.title("Resample time, bi-cubic: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, bi-cubic: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -211,7 +211,7 @@ ax = pmv.plot_array( dem_data, masked_values=rio.nodatavals, vmin=vmin, vmax=vmax ) -plt.title("Resample time, median: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, median: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -279,7 +279,7 @@ vmin=vmin, vmax=vmax, ) -plt.title("Resample time, nearest neighbor: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, nearest neighbor: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # + @@ -303,7 +303,7 @@ vmin=vmin, vmax=vmax, ) -plt.title("Resample time, bi-linear: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, bi-linear: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # + @@ -329,7 +329,7 @@ vmin=vmin, vmax=vmax, ) -plt.title("Resample time, median: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, median: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -431,7 +431,7 @@ ax = rio.plot(ax=ax, vmin=vmin, vmax=vmax) ax.plot(shape.T[0], shape.T[1], "r-") -plt.title("Cropping time: {:.3f} sec".format(crop_time)) +plt.title(f"Cropping time: {crop_time:.3f} sec") plt.colorbar(ax.images[0], shrink=0.7) # - @@ -459,7 +459,7 @@ vmax=vmax, ) plt.plot(shape.T[0], shape.T[1], "r-") -plt.title("Resample time, nearest neighbor: {:.3f} sec".format(resample_time)) +plt.title(f"Resample time, nearest neighbor: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # + @@ -484,7 +484,7 @@ vmax=vmax, ) plt.plot(shape.T[0], shape.T[1], "r-") -plt.title("Resample time, bi-linear: {:.3f} 
sec".format(resample_time)) +plt.title(f"Resample time, bi-linear: {resample_time:.3f} sec") plt.colorbar(ax, shrink=0.7) # - @@ -533,7 +533,7 @@ shape = np.array(polygon).T plt.plot(shape[0], shape[1], "r-") -plt.title("Cropped Arbitrary Polygon: {:.3f} sec".format(crop_time)) +plt.title(f"Cropped Arbitrary Polygon: {crop_time:.3f} sec") plt.colorbar(ax.images[0], shrink=0.7) # - diff --git a/.docs/Notebooks/save_binary_data_file_example.py b/.docs/Notebooks/save_binary_data_file_example.py index 051d502e20..ff7d0a3fe3 100644 --- a/.docs/Notebooks/save_binary_data_file_example.py +++ b/.docs/Notebooks/save_binary_data_file_example.py @@ -39,9 +39,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + nlay, nrow, ncol = 1, 20, 10 diff --git a/.docs/Notebooks/seawat_henry_example.py b/.docs/Notebooks/seawat_henry_example.py index 9d68ab030b..417e20b042 100644 --- a/.docs/Notebooks/seawat_henry_example.py +++ b/.docs/Notebooks/seawat_henry_example.py @@ -33,8 +33,8 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"flopy version: {flopy.__version__}") # + pycharm={"name": "#%%\n"} # temporary directory @@ -152,8 +152,8 @@ # Try to delete the output files, to prevent accidental use of older files try: os.remove(os.path.join(workspace, "MT3D001.UCN")) - os.remove(os.path.join(workspace, modelname + ".hds")) - os.remove(os.path.join(workspace, modelname + ".cbc")) + os.remove(os.path.join(workspace, f"{modelname}.hds")) + os.remove(os.path.join(workspace, f"{modelname}.cbc")) except: pass diff --git a/.docs/Notebooks/sfrpackage_example.py b/.docs/Notebooks/sfrpackage_example.py index 517a1cdb88..292f7d6fec 100644 --- a/.docs/Notebooks/sfrpackage_example.py +++ b/.docs/Notebooks/sfrpackage_example.py @@ -50,10 +50,10 @@ mpl.rcParams["figure.figsize"] = (11, 8.5) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Set name of MODFLOW exe diff --git a/.docs/Notebooks/shapefile_export_example.py b/.docs/Notebooks/shapefile_export_example.py index 95377b71fd..8026c86d20 100644 --- a/.docs/Notebooks/shapefile_export_example.py +++ b/.docs/Notebooks/shapefile_export_example.py @@ -38,9 +38,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # + # temporary directory @@ -71,7 +71,7 @@ # ## Declarative export using attached `.export()` methods # #### Export the whole model to a single shapefile -fname = "{}/model.shp".format(outdir) +fname = f"{outdir}/model.shp" m.export(fname) ax = plt.subplot(1, 1, 1, aspect="equal") @@ -81,7 +81,7 @@ 
ax.set_ylim(extents[2], extents[3]) ax.set_title(fname) -fname = "{}/wel.shp".format(outdir) +fname = f"{outdir}/wel.shp" m.wel.export(fname) # ### Export a package to a shapefile @@ -90,8 +90,8 @@ m.lpf.hk -fname = "{}/hk.shp".format(outdir) -m.lpf.hk.export("{}/hk.shp".format(outdir)) +fname = f"{outdir}/hk.shp" +m.lpf.hk.export(f"{outdir}/hk.shp") ax = plt.subplot(1, 1, 1, aspect="equal") extents = grid.extent @@ -103,14 +103,14 @@ m.riv.stress_period_data -m.riv.stress_period_data.export("{}/riv_spd.shp".format(outdir)) +m.riv.stress_period_data.export(f"{outdir}/riv_spd.shp") # ### MfList.export() exports the whole grid by default, regardless of the locations of the boundary cells # `sparse=True` only exports the boundary cells in the MfList -m.riv.stress_period_data.export("{}/riv_spd.shp".format(outdir), sparse=True) +m.riv.stress_period_data.export(f"{outdir}/riv_spd.shp", sparse=True) -m.wel.stress_period_data.export("{}/wel_spd.shp".format(outdir), sparse=True) +m.wel.stress_period_data.export(f"{outdir}/wel_spd.shp", sparse=True) # ## Ad-hoc exporting using `recarray2shp` # * The main idea is to create a recarray with all of the attribute information, and a list of geometry features (one feature per row in the recarray) @@ -142,7 +142,7 @@ # ##### write the shapefile -fname = "{}/bcs.shp".format(outdir) +fname = f"{outdir}/bcs.shp" recarray2shp(spd.to_records(), geoms=polygons, shpname=fname, crs=grid.epsg) ax = plt.subplot(1, 1, 1, aspect="equal") @@ -172,7 +172,7 @@ geoms = [Point(x, y) for x, y in zip(welldata.x_utm, welldata.y_utm)] -fname = "{}/wel_data.shp".format(outdir) +fname = f"{outdir}/wel_data.shp" recarray2shp(welldata.to_records(), geoms=geoms, shpname=fname, crs=grid.epsg) # - @@ -202,7 +202,7 @@ rivdata = pd.DataFrame(m.riv.stress_period_data[0]) rivdata["reach"] = np.arange(len(lines)) -lines_shapefile = "{}/riv_reaches.shp".format(outdir) +lines_shapefile = f"{outdir}/riv_reaches.shp" recarray2shp( rivdata.to_records(index=False), geoms=lines, @@ -271,12 +271,12 @@ ) # exporting an entire model -m.export("{}/freyberg.shp".format(outdir), modelgrid=modelgrid) +m.export(f"{outdir}/freyberg.shp", modelgrid=modelgrid) # - # And for a specific parameter the method is the same -fname = "{}/hk.shp".format(outdir) +fname = f"{outdir}/hk.shp" m.lpf.hk.export(fname, modelgrid=modelgrid) ax = plt.subplot(1, 1, 1, aspect="equal") diff --git a/.docs/Notebooks/shapefile_feature_examples.py b/.docs/Notebooks/shapefile_feature_examples.py index c43a08d62e..4d0bf9ff76 100644 --- a/.docs/Notebooks/shapefile_feature_examples.py +++ b/.docs/Notebooks/shapefile_feature_examples.py @@ -45,9 +45,9 @@ warnings.simplefilter("ignore", UserWarning) print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # ### write a numpy record array to a shapefile diff --git a/.docs/Notebooks/swi2package_example1.py b/.docs/Notebooks/swi2package_example1.py index 7f9ac1c1e6..4241716e6d 100644 --- a/.docs/Notebooks/swi2package_example1.py +++ b/.docs/Notebooks/swi2package_example1.py @@ -40,9 +40,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") 
+print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model. @@ -147,11 +147,11 @@ # Load the head and zeta data from the file # read model heads -hfile = flopy.utils.HeadFile(os.path.join(ml.model_ws, modelname + ".hds")) +hfile = flopy.utils.HeadFile(os.path.join(ml.model_ws, f"{modelname}.hds")) head = hfile.get_alldata() # read model zeta zfile = flopy.utils.CellBudgetFile( - os.path.join(ml.model_ws, modelname + ".zta") + os.path.join(ml.model_ws, f"{modelname}.zta") ) kstpkper = zfile.get_kstpkper() zeta = [] diff --git a/.docs/Notebooks/swi2package_example4.py b/.docs/Notebooks/swi2package_example4.py index 48db513d9a..5ecac1b63c 100644 --- a/.docs/Notebooks/swi2package_example4.py +++ b/.docs/Notebooks/swi2package_example4.py @@ -45,9 +45,9 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model. @@ -307,7 +307,7 @@ # read base model zeta zfile = flopy.utils.CellBudgetFile( - os.path.join(ml.model_ws, modelname + ".zta") + os.path.join(ml.model_ws, f"{modelname}.zta") ) kstpkper = zfile.get_kstpkper() zeta = [] @@ -316,14 +316,14 @@ zeta = np.array(zeta) # read swi obs zobs = np.genfromtxt( - os.path.join(ml.model_ws, modelname + ".zobs.out"), names=True + os.path.join(ml.model_ws, f"{modelname}.zobs.out"), names=True ) # Load the simulation 2 `ZETA` data and `ZETA` observations. 
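
# In both blocks the zeta surfaces come back as budget records, one per
# time step; a sketch of the loop body the diff context skips over, using
# the simulation 1 names defined above (the record text id follows the
# SWI2 output convention and is an assumption here):
zeta = np.array(
    [zfile.get_data(kstpkper=kk, text="ZETASRF  1")[0] for kk in kstpkper]
)
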
# read saltwater well model zeta zfile2 = flopy.utils.CellBudgetFile( - os.path.join(ml2.model_ws, modelname2 + ".zta") + os.path.join(ml2.model_ws, f"{modelname2}.zta") ) kstpkper = zfile2.get_kstpkper() zeta2 = [] @@ -332,7 +332,7 @@ zeta2 = np.array(zeta2) # read swi obs zobs2 = np.genfromtxt( - os.path.join(ml2.model_ws, modelname2 + ".zobs.out"), names=True + os.path.join(ml2.model_ws, f"{modelname2}.zobs.out"), names=True ) # Create arrays for the x-coordinates and the output years @@ -379,7 +379,7 @@ drawstyle="steps-mid", linewidth=0.5, color=cc[idx], - label="{:2d} years".format(years[idx]), + label=f"{years[idx]:2d} years", ) # layer 2 ax.plot( @@ -437,7 +437,7 @@ drawstyle="steps-mid", linewidth=0.5, color=cc[idx - 5], - label="{:2d} years".format(years[idx]), + label=f"{years[idx]:2d} years", ) # layer 2 ax.plot( @@ -495,7 +495,7 @@ drawstyle="steps-mid", linewidth=0.5, color=cc[idx - 5], - label="{:2d} years".format(years[idx]), + label=f"{years[idx]:2d} years", ) # layer 2 ax.plot( @@ -613,14 +613,14 @@ zeta[4, :, :, :], colors=colors, ax=ax, edgecolors="none" ) linecollection = modelxsect.plot_grid(ax=ax) -ax.set_title("Recharge year {}".format(years[4])) +ax.set_title(f"Recharge year {years[4]}") ax = fig.add_subplot(1, 2, 2) ax.set_xlim(0, 3050) ax.set_ylim(-50, -10) modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax) linecollection = modelxsect.plot_grid(ax=ax) -ax.set_title("Scenario year {}".format(years[-1])) +ax.set_title(f"Scenario year {years[-1]}") # - try: diff --git a/.docs/Notebooks/uzf_example.py b/.docs/Notebooks/uzf_example.py index 529d6fa208..67f934d0ac 100644 --- a/.docs/Notebooks/uzf_example.py +++ b/.docs/Notebooks/uzf_example.py @@ -52,10 +52,10 @@ from flopy.utils import flopy_io print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"flopy version: {flopy.__version__}") # - # Set name of MODFLOW exe @@ -240,7 +240,7 @@ uzfbdobjct = flopy.utils.CellBudgetFile(fpth) uzfbdobjct.list_records() else: - print('"{}" is not available'.format(fpth)) + print(f'"{fpth}" is not available') if success and avail: r = uzfbdobjct.get_data(text="UZF RECHARGE") diff --git a/.docs/Notebooks/vtk_pathlines_example.py b/.docs/Notebooks/vtk_pathlines_example.py index ec649fa9d6..647c8d56f1 100644 --- a/.docs/Notebooks/vtk_pathlines_example.py +++ b/.docs/Notebooks/vtk_pathlines_example.py @@ -29,7 +29,7 @@ import flopy print(sys.version) -print("flopy version: {}".format(flopy.__version__)) +print(f"flopy version: {flopy.__version__}") # - # Load the Freyberg MF6 groundwater flow model. 
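
# A sketch of that load step (the workspace path is a placeholder, not
# part of this notebook; the real path sits outside this hunk):
sim = flopy.mf6.MFSimulation.load(sim_ws="path/to/freyberg_mf6")
gwf = sim.get_model()
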
diff --git a/.docs/Notebooks/zonebudget_example.py b/.docs/Notebooks/zonebudget_example.py index 4ec326c67b..fa48a8a17e 100644 --- a/.docs/Notebooks/zonebudget_example.py +++ b/.docs/Notebooks/zonebudget_example.py @@ -43,10 +43,10 @@ import flopy print(sys.version) -print("numpy version: {}".format(np.__version__)) -print("matplotlib version: {}".format(mpl.__version__)) -print("pandas version: {}".format(pd.__version__)) -print("flopy version: {}".format(flopy.__version__)) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"pandas version: {pd.__version__}") +print(f"flopy version: {flopy.__version__}") # + # temporary workspace @@ -119,9 +119,9 @@ rowidx = np.in1d(cmdbud["name"], names) colidx = "ZONE_1" -print("{:,.1f} cubic meters/day".format(cmdbud[rowidx][colidx][0])) -print("{:,.1f} cubic feet/day".format(cfdbud[rowidx][colidx][0])) -print("{:,.1f} inches/year".format(inyrbud[rowidx][colidx][0])) +print(f"{cmdbud[rowidx][colidx][0]:,.1f} cubic meters/day") +print(f"{cfdbud[rowidx][colidx][0]:,.1f} cubic feet/day") +print(f"{inyrbud[rowidx][colidx][0]:,.1f} inches/year") # - cmd is cfd @@ -205,7 +205,7 @@ print(pd.read_csv(f_out).to_string(index=False)) except: - with open(f_out, "r") as f: + with open(f_out) as f: for line in f.readlines(): print("\t".join(line.split(","))) # - @@ -236,7 +236,7 @@ # + def tick_label_formatter_comma_sep(x, pos): - return "{:,.0f}".format(x) + return f"{x:,.0f}" def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): @@ -268,7 +268,7 @@ def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): plt.ylim([ymin, ymax * 1.25]) for i, rect in enumerate(rects_in): - label = "{:,.0f}".format(values_in[i]) + label = f"{values_in[i]:,.0f}" height = values_in[i] x = rect.get_x() + rect.get_width() / 2 y = height + (0.02 * ymax) @@ -284,7 +284,7 @@ def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): ) for i, rect in enumerate(rects_out): - label = "{:,.0f}".format(values_out[i]) + label = f"{values_out[i]:,.0f}" height = values_out[i] x = rect.get_x() + rect.get_width() / 2 y = height + (0.02 * ymin) @@ -325,10 +325,8 @@ def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): ) recname = "STORAGE" - values_in = zb.get_dataframes(names="FROM_{}".format(recname)).T.squeeze() - values_out = ( - zb.get_dataframes(names="TO_{}".format(recname)).T.squeeze() * -1 - ) + values_in = zb.get_dataframes(names=f"FROM_{recname}").T.squeeze() + values_out = zb.get_dataframes(names=f"TO_{recname}").T.squeeze() * -1 labels = values_in.index.tolist() rects_in, rects_out = volumetric_budget_bar_plot( @@ -336,7 +334,7 @@ def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): ) plt.ylabel("Volumetric rate, in Mgal/d") - plt.title("{} @ totim = {}".format(recname, t)) + plt.title(f"{recname} @ totim = {t}") plt.tight_layout() plt.show() From 8c330a7890fe732c5489ce6a2d5184a7b2fb70ee Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Thu, 27 Jul 2023 08:30:33 -0700 Subject: [PATCH 015/101] fix(check): check now works properly with confined conditions (#1880) (#1882) * fix(check): check now works properly with confined conditions (#1880) * github workflows undo --------- Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- autotest/regression/test_mf6.py | 7 ++++++- flopy/pakbase.py | 9 +++++++++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index 
2f8b1a2418..6b8b8d207e 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -948,9 +948,14 @@ def test_np002(function_tmpdir, example_data_path): oc_package.printrecord.set_data([("HEAD", "ALL"), ("BUDGET", "ALL")], 1) sto_package = ModflowGwfsto( - model, save_flows=True, iconvert=1, ss=0.000001, sy=0.15 + model, save_flows=True, iconvert=0, ss=0.000001, sy=None, pname="sto_t" ) + sto_package.check() + model.remove_package("sto_t") + sto_package = ModflowGwfsto( + model, save_flows=True, iconvert=1, ss=0.000001, sy=0.15 + ) hfb_package = ModflowGwfhfb( model, print_input=True, diff --git a/flopy/pakbase.py b/flopy/pakbase.py index 11aee0f5a6..432381820c 100644 --- a/flopy/pakbase.py +++ b/flopy/pakbase.py @@ -413,6 +413,15 @@ def _check_storage(self, chk, storage_coeff): skip_sy_check = True else: iconvert = self.iconvert.array + inds = np.array( + [ + True if l > 0 or l < 0 else False + for l in iconvert.flatten() + ] + ) + if not inds.any(): + skip_sy_check = True + for ishape in np.ndindex(active.shape): if active[ishape]: active[ishape] = ( From 79311120d0daeb3ba1281c70faf97fcd92a1fde5 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 28 Jul 2023 20:04:38 -0400 Subject: [PATCH 016/101] feat(binaryfile): add reverse() method to HeadFile, CellBudgetFile (#1829) --- autotest/test_binaryfile.py | 270 ++++---------- autotest/test_cellbudgetfile.py | 308 +++++++++++++++ flopy/utils/binaryfile.py | 637 +++++++++++++++++++++----------- flopy/utils/datafile.py | 7 + 4 files changed, 803 insertions(+), 419 deletions(-) create mode 100644 autotest/test_cellbudgetfile.py diff --git a/autotest/test_binaryfile.py b/autotest/test_binaryfile.py index bb6795b708..d0d5925205 100644 --- a/autotest/test_binaryfile.py +++ b/autotest/test_binaryfile.py @@ -6,12 +6,14 @@ from matplotlib import pyplot as plt from matplotlib.axes import Axes +from flopy.mf6.modflow import MFSimulation from flopy.modflow import Modflow from flopy.utils import ( BinaryHeader, CellBudgetFile, HeadFile, HeadUFile, + UcnFile, Util2d, ) from flopy.utils.binaryfile import ( @@ -197,6 +199,9 @@ def test_headufile_get_ts(example_data_path): heads.get_ts([i, i + 1, i + 2]) +# precision + + def test_get_headfile_precision(example_data_path): precision = get_headfile_precision( example_data_path / "freyberg" / "freyberg.githds" @@ -249,6 +254,9 @@ def test_budgetfile_detect_precision_double(path): assert file.realtype == np.float64 +# write + + def test_write_head(function_tmpdir): file_path = function_tmpdir / "headfile" head_data = np.random.random((10, 10)) @@ -286,6 +294,9 @@ def test_write_budget(function_tmpdir): assert np.array_equal(content1, content2) +# read + + def test_binaryfile_read(function_tmpdir, freyberg_model_path): h = HeadFile(freyberg_model_path / "freyberg.githds") assert isinstance(h, HeadFile) @@ -336,212 +347,59 @@ def test_binaryfile_read_context(freyberg_model_path): assert str(e.value) == "seek of closed file", str(e.value) -def test_cellbudgetfile_read_context(example_data_path): - mf2005_model_path = example_data_path / "mf2005_test" - cbc_path = mf2005_model_path / "mnw1.gitcbc" - with CellBudgetFile(cbc_path) as v: - data = v.get_data(text="DRAINS")[0] - assert data.min() < 0, data.min() - assert not v.file.closed - assert v.file.closed - - with pytest.raises(ValueError) as e: - v.get_data(text="DRAINS") - assert str(e.value) == "seek of closed file", str(e.value) - - -def test_cellbudgetfile_read(example_data_path): - mf2005_model_path = example_data_path / 
"mf2005_test" - v = CellBudgetFile(mf2005_model_path / "mnw1.gitcbc") - assert isinstance(v, CellBudgetFile) - - kstpkper = v.get_kstpkper() - assert len(kstpkper) == 5, "length of kstpkper != 5" - - records = v.get_unique_record_names() - idx = 0 - for t in kstpkper: - for record in records: - t0 = v.get_data(kstpkper=t, text=record, full3D=True)[0] - t1 = v.get_data(idx=idx, text=record, full3D=True)[0] - assert np.array_equal(t0, t1), ( - f"binary budget item {record} read using kstpkper != binary " - f"budget item {record} read using idx" - ) - idx += 1 - v.close() - - -def test_cellbudgetfile_position(function_tmpdir, zonbud_model_path): - fpth = zonbud_model_path / "freyberg.gitcbc" - v = CellBudgetFile(fpth) - assert isinstance(v, CellBudgetFile) - - # starting position of data - idx = 8767 - ipos = v.get_position(idx) - ival = 50235424 - assert ipos == ival, f"position of index 8767 != {ival}" - - ipos = v.get_position(idx, header=True) - ival = 50235372 - assert ipos == ival, f"position of index 8767 header != {ival}" - - cbcd = [] - for i in range(idx, v.get_nrecords()): - cbcd.append(v.get_data(i)[0]) - v.close() - - # write the last entry as a new binary file - fin = open(fpth, "rb") - fin.seek(ipos) - length = os.path.getsize(fpth) - ipos - - buffsize = 32 - opth = str(function_tmpdir / "end.cbc") - with open(opth, "wb") as fout: - while length: - chunk = min(buffsize, length) - data = fin.read(chunk) - fout.write(data) - length -= chunk - fin.close() - - v2 = CellBudgetFile(opth, verbose=True) - - try: - v2.list_records() - except: - assert False, f"could not list records on {opth}" - - names = v2.get_unique_record_names(decode=True) - - cbcd2 = [] - for i in range(0, v2.get_nrecords()): - cbcd2.append(v2.get_data(i)[0]) - v2.close() - - for i, (d1, d2) in enumerate(zip(cbcd, cbcd2)): - msg = f"{names[i].rstrip()} data from slice is not identical" - assert np.array_equal(d1, d2), msg - - # Check error when reading empty file - fname = function_tmpdir / "empty.gitcbc" - with open(fname, "w"): - pass - with pytest.raises(ValueError): - CellBudgetFile(fname) - - -def test_cellbudgetfile_readrecord(example_data_path): - mf2005_model_path = example_data_path / "mf2005_test" - cbc_fname = mf2005_model_path / "test1tr.gitcbc" - v = CellBudgetFile(cbc_fname) - assert isinstance(v, CellBudgetFile) - - kstpkper = v.get_kstpkper() - assert len(kstpkper) == 30, "length of kstpkper != 30" - - with pytest.raises(TypeError) as e: - v.get_data() - assert str(e.value).startswith( - "get_data() missing 1 required argument" - ), str(e.exception) - - t = v.get_data(text="STREAM LEAKAGE") - assert len(t) == 30, "length of stream leakage data != 30" - assert ( - t[0].shape[0] == 36 - ), "sfr budget data does not have 36 reach entries" - - t = v.get_data(text="STREAM LEAKAGE", full3D=True) - assert t[0].shape == (1, 15, 10), ( - "3D sfr budget data does not have correct shape (1, 15,10) - " - "returned shape {}".format(t[0].shape) - ) +# reverse - for kk in kstpkper: - t = v.get_data(kstpkper=kk, text="STREAM LEAKAGE", full3D=True)[0] - assert t.shape == (1, 15, 10), ( - "3D sfr budget data for kstpkper {} " - "does not have correct shape (1, 15,10) - " - "returned shape {}".format(kk, t[0].shape) - ) - - idx = v.get_indices() - assert idx is None, "get_indices() without record did not return None" - - records = v.get_unique_record_names() - for record in records: - indices = v.get_indices(text=record.strip()) - for idx, kk in enumerate(kstpkper): - t0 = v.get_data(kstpkper=kk, 
text=record.strip())[0] - t1 = v.get_data(idx=indices[idx], text=record)[0] - assert np.array_equal( - t0, t1 - ), "binary budget item {0} read using kstpkper != binary budget item {0} read using idx".format( - record - ) - - # idx can be either an int or a list of ints - s9 = v.get_data(idx=9) - assert len(s9) == 1 - s09 = v.get_data(idx=[0, 9]) - assert len(s09) == 2 - assert (s09[1] == s9).all() - - v.close() - - -def test_cellbudgetfile_readrecord_waux(example_data_path): - mf2005_model_path = example_data_path / "mf2005_test" - cbc_fname = mf2005_model_path / "test1tr.gitcbc" - v = CellBudgetFile(cbc_fname) - assert isinstance(v, CellBudgetFile) - - kstpkper = v.get_kstpkper() - assert len(kstpkper) == 30, "length of kstpkper != 30" - - t = v.get_data(text="WELLS") - assert len(t) == 30, "length of well data != 30" - assert t[0].shape[0] == 10, "wel budget data does not have 10 well entries" - assert t[0].dtype.names == ("node", "q", "IFACE") - np.testing.assert_array_equal( - t[0]["node"], - [54, 55, 64, 65, 74, 75, 84, 85, 94, 95], - ) - np.testing.assert_array_equal(t[0]["q"], np.repeat(np.float32(-10.0), 10)) - np.testing.assert_array_equal( - t[0]["IFACE"], - np.array([1, 2, 3, 4, 5, 6, 0, 0, 0, 0], np.float32), - ) - t = v.get_data(text="WELLS", full3D=True) - assert t[0].shape == (1, 15, 10), ( - "3D wel budget data does not have correct shape (1, 15,10) - " - "returned shape {}".format(t[0].shape) +def test_headfile_reverse_mf6(example_data_path, function_tmpdir): + # load simulation and extract tdis + sim_name = "test006_gwf3" + sim = MFSimulation.load( + sim_name=sim_name, sim_ws=example_data_path / "mf6" / sim_name ) - - for kk in kstpkper: - t = v.get_data(kstpkper=kk, text="wells", full3D=True)[0] - assert t.shape == (1, 15, 10), ( - "3D wel budget data for kstpkper {} " - "does not have correct shape (1, 15,10) - " - "returned shape {}".format(kk, t[0].shape) - ) - - idx = v.get_indices() - assert idx is None, "get_indices() without record did not return None" - - records = v.get_unique_record_names() - for record in records: - indices = v.get_indices(text=record.strip()) - for idx, kk in enumerate(kstpkper): - t0 = v.get_data(kstpkper=kk, text=record.strip())[0] - t1 = v.get_data(idx=indices[idx], text=record)[0] - assert np.array_equal( - t0, t1 - ), "binary budget item {0} read using kstpkper != binary budget item {0} read using idx".format( - record - ) - v.close() + tdis = sim.get_package("tdis") + + # load cell budget file, providing tdis as kwarg + model_path = example_data_path / "mf6" / sim_name + file_stem = "flow_adj" + file_path = model_path / "expected_output" / f"{file_stem}.hds" + f = HeadFile(file_path, tdis=tdis) + assert isinstance(f, HeadFile) + + # reverse the file + rf_name = f"{file_stem}_rev.hds" + f.reverse(filename=function_tmpdir / rf_name) + rf = HeadFile(function_tmpdir / rf_name) + assert isinstance(rf, HeadFile) + + # check that data from both files have the same shape + f_data = f.get_alldata() + f_shape = f_data.shape + rf_data = rf.get_alldata() + rf_shape = rf_data.shape + assert f_shape == rf_shape + + # check number of records + nrecords = f.get_nrecords() + assert nrecords == rf.get_nrecords() + + # check that the data are reversed + for idx in range(nrecords - 1, -1, -1): + # check headers + f_header = list(f.recordarray[nrecords - idx - 1]) + rf_header = list(rf.recordarray[idx]) + f_totim = f_header.pop(9) # todo check totim + rf_totim = rf_header.pop(9) + assert f_header == rf_header + assert f_header == rf_header + + # check data 
+ f_data = f.get_data(idx=idx)[0] + rf_data = rf.get_data(idx=nrecords - idx - 1)[0] + assert f_data.shape == rf_data.shape + if f_data.ndim == 1: + for row in range(len(f_data)): + f_datum = f_data[row] + rf_datum = rf_data[row] + assert f_datum == rf_datum + else: + assert np.array_equal(f_data[0][0], rf_data[0][0]) diff --git a/autotest/test_cellbudgetfile.py b/autotest/test_cellbudgetfile.py new file mode 100644 index 0000000000..175df6037b --- /dev/null +++ b/autotest/test_cellbudgetfile.py @@ -0,0 +1,308 @@ +import os + +import numpy as np +import pytest + +from flopy.mf6.modflow.mfsimulation import MFSimulation +from flopy.utils.binaryfile import CellBudgetFile + + +@pytest.fixture +def zonbud_model_path(example_data_path): + return example_data_path / "zonbud_examples" + + +def test_cellbudgetfile_position(function_tmpdir, zonbud_model_path): + fpth = zonbud_model_path / "freyberg.gitcbc" + v = CellBudgetFile(fpth) + assert isinstance(v, CellBudgetFile) + + # starting position of data + idx = 8767 + ipos = v.get_position(idx) + ival = 50235424 + assert ipos == ival, f"position of index 8767 != {ival}" + + ipos = v.get_position(idx, header=True) + ival = 50235372 + assert ipos == ival, f"position of index 8767 header != {ival}" + + cbcd = [] + for i in range(idx, v.get_nrecords()): + cbcd.append(v.get_data(i)[0]) + v.close() + + # write the last entry as a new binary file + fin = open(fpth, "rb") + fin.seek(ipos) + length = os.path.getsize(fpth) - ipos + + buffsize = 32 + opth = str(function_tmpdir / "end.cbc") + with open(opth, "wb") as fout: + while length: + chunk = min(buffsize, length) + data = fin.read(chunk) + fout.write(data) + length -= chunk + fin.close() + + v2 = CellBudgetFile(opth, verbose=True) + + try: + v2.list_records() + except: + assert False, f"could not list records on {opth}" + + names = v2.get_unique_record_names(decode=True) + + cbcd2 = [] + for i in range(0, v2.get_nrecords()): + cbcd2.append(v2.get_data(i)[0]) + v2.close() + + for i, (d1, d2) in enumerate(zip(cbcd, cbcd2)): + msg = f"{names[i].rstrip()} data from slice is not identical" + assert np.array_equal(d1, d2), msg + + # Check error when reading empty file + fname = function_tmpdir / "empty.gitcbc" + with open(fname, "w"): + pass + with pytest.raises(ValueError): + CellBudgetFile(fname) + + +# read context + + +def test_cellbudgetfile_read_context(example_data_path): + mf2005_model_path = example_data_path / "mf2005_test" + cbc_path = mf2005_model_path / "mnw1.gitcbc" + with CellBudgetFile(cbc_path) as v: + data = v.get_data(text="DRAINS")[0] + assert data.min() < 0, data.min() + assert not v.file.closed + assert v.file.closed + + with pytest.raises(ValueError) as e: + v.get_data(text="DRAINS") + assert str(e.value) == "seek of closed file", str(e.value) + + +# read + + +def test_cellbudgetfile_read(example_data_path): + mf2005_model_path = example_data_path / "mf2005_test" + v = CellBudgetFile(mf2005_model_path / "mnw1.gitcbc") + assert isinstance(v, CellBudgetFile) + + kstpkper = v.get_kstpkper() + assert len(kstpkper) == 5, "length of kstpkper != 5" + + records = v.get_unique_record_names() + idx = 0 + for t in kstpkper: + for record in records: + t0 = v.get_data(kstpkper=t, text=record, full3D=True)[0] + t1 = v.get_data(idx=idx, text=record, full3D=True)[0] + assert np.array_equal(t0, t1), ( + f"binary budget item {record} read using kstpkper != binary " + f"budget item {record} read using idx" + ) + idx += 1 + v.close() + + +# readrecord + + +def 
test_cellbudgetfile_readrecord(example_data_path): + mf2005_model_path = example_data_path / "mf2005_test" + cbc_fname = mf2005_model_path / "test1tr.gitcbc" + v = CellBudgetFile(cbc_fname) + assert isinstance(v, CellBudgetFile) + + kstpkper = v.get_kstpkper() + assert len(kstpkper) == 30, "length of kstpkper != 30" + + with pytest.raises(TypeError) as e: + v.get_data() + assert str(e.value).startswith( + "get_data() missing 1 required argument" + ), str(e.exception) + + t = v.get_data(text="STREAM LEAKAGE") + assert len(t) == 30, "length of stream leakage data != 30" + assert ( + t[0].shape[0] == 36 + ), "sfr budget data does not have 36 reach entries" + + t = v.get_data(text="STREAM LEAKAGE", full3D=True) + assert t[0].shape == (1, 15, 10), ( + "3D sfr budget data does not have correct shape (1, 15,10) - " + "returned shape {}".format(t[0].shape) + ) + + for kk in kstpkper: + t = v.get_data(kstpkper=kk, text="STREAM LEAKAGE", full3D=True)[0] + assert t.shape == (1, 15, 10), ( + "3D sfr budget data for kstpkper {} " + "does not have correct shape (1, 15,10) - " + "returned shape {}".format(kk, t[0].shape) + ) + + idx = v.get_indices() + assert idx is None, "get_indices() without record did not return None" + + records = v.get_unique_record_names() + for record in records: + indices = v.get_indices(text=record.strip()) + for idx, kk in enumerate(kstpkper): + t0 = v.get_data(kstpkper=kk, text=record.strip())[0] + t1 = v.get_data(idx=indices[idx], text=record)[0] + assert np.array_equal( + t0, t1 + ), "binary budget item {0} read using kstpkper != binary budget item {0} read using idx".format( + record + ) + + # idx can be either an int or a list of ints + s9 = v.get_data(idx=9) + assert len(s9) == 1 + s09 = v.get_data(idx=[0, 9]) + assert len(s09) == 2 + assert (s09[1] == s9).all() + + v.close() + + +def test_cellbudgetfile_readrecord_waux(example_data_path): + mf2005_model_path = example_data_path / "mf2005_test" + cbc_fname = mf2005_model_path / "test1tr.gitcbc" + v = CellBudgetFile(cbc_fname) + assert isinstance(v, CellBudgetFile) + + kstpkper = v.get_kstpkper() + assert len(kstpkper) == 30, "length of kstpkper != 30" + + t = v.get_data(text="WELLS") + assert len(t) == 30, "length of well data != 30" + assert t[0].shape[0] == 10, "wel budget data does not have 10 well entries" + assert t[0].dtype.names == ("node", "q", "IFACE") + np.testing.assert_array_equal( + t[0]["node"], + [54, 55, 64, 65, 74, 75, 84, 85, 94, 95], + ) + np.testing.assert_array_equal(t[0]["q"], np.repeat(np.float32(-10.0), 10)) + np.testing.assert_array_equal( + t[0]["IFACE"], + np.array([1, 2, 3, 4, 5, 6, 0, 0, 0, 0], np.float32), + ) + + t = v.get_data(text="WELLS", full3D=True) + assert t[0].shape == (1, 15, 10), ( + "3D wel budget data does not have correct shape (1, 15,10) - " + "returned shape {}".format(t[0].shape) + ) + + for kk in kstpkper: + t = v.get_data(kstpkper=kk, text="wells", full3D=True)[0] + assert t.shape == (1, 15, 10), ( + "3D wel budget data for kstpkper {} " + "does not have correct shape (1, 15,10) - " + "returned shape {}".format(kk, t[0].shape) + ) + + idx = v.get_indices() + assert idx is None, "get_indices() without record did not return None" + + records = v.get_unique_record_names() + for record in records: + indices = v.get_indices(text=record.strip()) + for idx, kk in enumerate(kstpkper): + t0 = v.get_data(kstpkper=kk, text=record.strip())[0] + t1 = v.get_data(idx=indices[idx], text=record)[0] + assert np.array_equal( + t0, t1 + ), "binary budget item {0} read using kstpkper != binary 
budget item {0} read using idx".format( + record + ) + v.close() + + +# reverse + + +@pytest.mark.skip( + reason="failing, need to modify CellBudgetFile.reverse to support mf2005?" +) +def test_cellbudgetfile_reverse_mf2005(example_data_path, function_tmpdir): + sim_name = "test1tr" + + # load simulation and extract tdis + sim = MFSimulation.load( + sim_name=sim_name, sim_ws=example_data_path / "mf2005_test" + ) + tdis = sim.get_package("tdis") + + mf2005_model_path = example_data_path / sim_name + cbc_fname = mf2005_model_path / f"{sim_name}.gitcbc" + f = CellBudgetFile(cbc_fname, tdis=tdis) + assert isinstance(f, CellBudgetFile) + + rf_name = "test1tr_rev.gitcbc" + f.reverse(function_tmpdir / rf_name) + rf = CellBudgetFile(function_tmpdir / rf_name) + assert isinstance(rf, CellBudgetFile) + + +def test_cellbudgetfile_reverse_mf6(example_data_path, function_tmpdir): + # load simulation and extract tdis + sim_name = "test006_gwf3" + sim = MFSimulation.load( + sim_name=sim_name, sim_ws=example_data_path / "mf6" / sim_name + ) + tdis = sim.get_package("tdis") + + # load cell budget file, providing tdis as kwarg + model_path = example_data_path / "mf6" / sim_name + file_stem = "flow_adj" + file_path = model_path / "expected_output" / f"{file_stem}.cbc" + f = CellBudgetFile(file_path, tdis=tdis) + assert isinstance(f, CellBudgetFile) + + # reverse the file + rf_name = f"{file_stem}_rev.cbc" + f.reverse(filename=function_tmpdir / rf_name) + rf = CellBudgetFile(function_tmpdir / rf_name) + assert isinstance(rf, CellBudgetFile) + + # check that both files have the same number of records + nrecords = f.get_nrecords() + assert nrecords == rf.get_nrecords() + + # check data were reversed + for idx in range(nrecords - 1, -1, -1): + # check headers + f_header = list(f.recordarray[nrecords - idx - 1]) + rf_header = list(rf.recordarray[idx]) + f_totim = f_header.pop(9) # todo check totim + rf_totim = rf_header.pop(9) + assert f_header == rf_header + + # check data + f_data = f.get_data(idx=idx)[0] + rf_data = rf.get_data(idx=nrecords - idx - 1)[0] + assert f_data.shape == rf_data.shape + if f_data.ndim == 1: + for row in range(len(f_data)): + f_datum = f_data[row] + rf_datum = rf_data[row] + # flows should be negated + rf_datum[2] = -rf_datum[2] + assert f_datum == rf_datum + else: + # flows should be negated + assert np.array_equal(f_data[0][0], -rf_data[0][0]) diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index db7ffaf9fb..07445cbcbe 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -11,7 +11,7 @@ import os import warnings from pathlib import Path -from typing import Union +from typing import Optional, Union import numpy as np @@ -165,16 +165,14 @@ def write_budget( class BinaryHeader(Header): """ - The binary_header class is a class to create headers for MODFLOW - binary files. + Represents data headers for binary output files. Parameters ---------- bintype : str - is the type of file being opened (head and ucn file currently - supported) + Type of file being opened. Accepted values are 'head' and 'ucn'. precision : str - is the precision of the floating point data in the file + Precision of floating point data in the file. """ @@ -423,9 +421,16 @@ def get_headfile_precision(filename: Union[str, os.PathLike]): class BinaryLayerFile(LayerFile): """ - The BinaryLayerFile class is the super class from which specific derived - classes are formed. 
This class should not be instantiated directly
+    The BinaryLayerFile class is a parent class from which concrete
+    classes inherit. This class should not be instantiated directly.
 
+    Notes
+    -----
+
+    The BinaryLayerFile class is built on a record array consisting of
+    headers, which are record arrays of the modflow header information
+    (kstp, kper, pertim, totim, text, nrow, ncol, ilay), and long ints
+    pointing to the 1st byte of data for the corresponding data arrays.
     """
 
     def __init__(
@@ -578,39 +583,25 @@ def get_ts(self, idx):
 
 class HeadFile(BinaryLayerFile):
     """
-    HeadFile Class.
+    The HeadFile class provides simple ways to retrieve and manipulate
+    2D or 3D head arrays, or time series arrays for one or more cells,
+    from a binary head output file. A utility method is also provided
+    to reverse the order of head data, for use with particle tracking
+    simulations in which particles are tracked backwards in time from
+    terminating to release locations (e.g., to compute capture zones).
 
     Parameters
    ----------
    filename : str or PathLike
-        Path of the concentration file
+        Path of the head file.
    text : string
-        Name of the text string in the head file. Default is 'head'
+        Name of the text string in the head file. Default is 'head'.
    precision : string
-        'auto', 'single' or 'double'. Default is 'auto'.
+        Precision of floating point head data in the file. Accepted
+        values are 'auto', 'single' or 'double'. Default is 'auto',
+        which enables automatic detection of precision.
    verbose : bool
-        Write information to the screen. Default is False.
-
-    Attributes
-    ----------
-
-    Methods
-    -------
-
-    See Also
-    --------
-
-    Notes
-    -----
-    The HeadFile class provides simple ways to retrieve 2d and 3d
-    head arrays from a MODFLOW binary head file and time series
-    arrays for one or more cells.
-
-    The BinaryLayerFile class is built on a record array consisting of
-    headers, which are record arrays of the modflow header information
-    (kstp, kper, pertim, totim, text, nrow, ncol, ilay)
-    and long integers, which are pointers to first bytes of data for
-    the corresponding data array.
+        Toggle logging output. Default is False.
 
     Examples
     --------
@@ -624,7 +615,6 @@ class HeadFile(BinaryLayerFile):
     >>> ddnobj.list_records()
 
     >>> rec = ddnobj.get_data(totim=100.)
-
     """
 
     def __init__(
@@ -647,6 +637,88 @@ def __init__(
         )
         super().__init__(filename, precision, verbose, kwargs)
 
+    def reverse(self, filename: Optional[os.PathLike] = None):
+        """
+        Write a new binary head file with the records in reverse order.
+        If a new filename is not provided, or if the filename is the same
+        as the existing filename, the file will be overwritten and data
+        reloaded from the rewritten/reversed file.
+
+        Parameters
+        ----------
+
+        filename : str or PathLike, optional
+            Path of the new reversed binary file to create.
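+
+        Examples
+        --------
+        A minimal sketch of intended usage, mirroring the tests; the
+        workspace and file names here are hypothetical, and a tdis
+        package must be supplied when opening the file:
+
+        >>> import flopy
+        >>> import flopy.utils.binaryfile as bf
+        >>> sim = flopy.mf6.MFSimulation.load(sim_ws="model_ws")
+        >>> tdis = sim.get_package("tdis")
+        >>> hds = bf.HeadFile("model.hds", tdis=tdis)
+        >>> hds.reverse(filename="model_rev.hds")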
+        """
+
+        filename = (
+            Path(filename).expanduser().absolute()
+            if filename
+            else self.filename
+        )
+
+        # header array formats
+        dt = np.dtype(
+            [
+                ("kstp", np.int32),
+                ("kper", np.int32),
+                ("pertim", np.float64),
+                ("totim", np.float64),
+                ("text", "S16"),
+                ("ncol", np.int32),
+                ("nrow", np.int32),
+                ("ilay", np.int32),
+            ]
+        )
+
+        # make sure we have tdis
+        if self.tdis is None or not any(self.tdis.perioddata.get_data()):
+            raise ValueError("tdis must be known to reverse head file")
+
+        # extract period data
+        pd = self.tdis.perioddata.get_data()
+
+        # get maximum period number and total simulation time
+        kpermx = len(pd) - 1
+        tsimtotal = 0.0
+        for tpd in pd:
+            tsimtotal += tpd[0]
+
+        # get total number of records
+        nrecords = self.recordarray.shape[0]
+
+        # open backward file
+        with open(filename, "wb") as fbin:
+            # loop over head file records in reverse order
+            for idx in range(nrecords - 1, -1, -1):
+                # load header array
+                header = self.recordarray[idx].copy()
+
+                # reverse kstp and kper in the header array
+                (kstp, kper) = (header["kstp"] - 1, header["kper"] - 1)
+                kstpmx = pd[kper][1] - 1
+                kstpb = kstpmx - kstp
+                kperb = kpermx - kper
+                (header["kstp"], header["kper"]) = (kstpb + 1, kperb + 1)
+
+                # reverse totim and pertim in the header array
+                header["totim"] = tsimtotal - header["totim"]
+                perlen = pd[kper][0]
+                header["pertim"] = perlen - header["pertim"]
+
+                # write header information
+                h = np.array(header, dtype=dt)
+                h.tofile(fbin)
+
+                # load and write data
+                data = self.get_data(idx=idx)[0][0]
+                data = np.array(data, dtype=np.float64)
+                data.tofile(fbin)
+
+        # if we rewrote the original file, reinitialize
+        if filename == self.filename:
+            super().__init__(self.filename, self.precision, self.verbose, {})
+
 
 class UcnFile(BinaryLayerFile):
     """
@@ -716,34 +788,190 @@ def __init__(
         return
 
 
-class BudgetIndexError(Exception):
-    pass
-
-
-class CellBudgetFile:
+class HeadUFile(BinaryLayerFile):
     """
-    CellBudgetFile Class.
+    The HeadUFile class provides simple ways to retrieve a list of
+    head arrays from a MODFLOW-USG binary head file and time series
+    arrays for one or more cells.
 
     Parameters
     ----------
     filename : str or PathLike
-        Path of the cell budget file
+        Path of the head file
+    text : string
+        Name of the text string in the head file. Default is 'headu'.
     precision : string
-        'single' or 'double'. Default is 'single'.
+        Precision of the floating point head data in the file. Accepted
+        values are 'auto', 'single' or 'double'. Default is 'auto', which
+        enables precision to be automatically detected.
     verbose : bool
-        Write information to the screen. Default is False.
+        Toggle logging output. Default is False.
 
-    Attributes
-    ----------
+    Notes
+    -----
 
-    Methods
-    -------
+    The BinaryLayerFile class is built on a record array consisting of
+    headers, which are record arrays of the modflow header information
+    (kstp, kper, pertim, totim, text, nrow, ncol, ilay), and long ints
+    pointing to the 1st byte of data for the corresponding data arrays.
+    This class overrides methods in the parent class so that the proper
+    sized arrays are created: for unstructured grids, nrow and ncol are
+    the starting and ending node numbers for layer, ilay.
 
-    See Also
+    When the get_data method is called for this class, a list of
+    one-dimensional arrays will be returned, where each array is the head
+    array for a layer. If the heads for a layer were not saved, then
+    None will be returned for that layer.
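+    Heads for all saved layers can be combined into a single array with,
+    e.g., ``np.concatenate(hdobj.get_data(totim=t))``, assuming heads
+    were written for every layer.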
+ + Examples -------- - Notes - ----- + >>> import flopy.utils.binaryfile as bf + >>> hdobj = bf.HeadUFile('model.hds') + >>> hdobj.list_records() + >>> usgheads = hdobj.get_data(kstpkper=(1, 50)) + + """ + + def __init__( + self, + filename: Union[str, os.PathLike], + text="headu", + precision="auto", + verbose=False, + **kwargs, + ): + """ + Class constructor + """ + self.text = text.encode() + if precision == "auto": + precision = get_headfile_precision(filename) + if precision == "unknown": + s = f"Error. Precision could not be determined for {filename}" + print(s) + raise Exception() + self.header_dtype = BinaryHeader.set_dtype( + bintype="Head", precision=precision + ) + super().__init__(filename, precision, verbose, kwargs) + + def _get_data_array(self, totim=0.0): + """ + Get a list of 1D arrays for the + specified kstp and kper value or totim value. + + """ + + if totim >= 0.0: + keyindices = np.where(self.recordarray["totim"] == totim)[0] + if len(keyindices) == 0: + msg = f"totim value ({totim}) not found in file..." + raise Exception(msg) + else: + raise Exception("Data not found...") + + # fill a list of 1d arrays with heads from binary file + data = self.nlay * [None] + for idx in keyindices: + ipos = self.iposarray[idx] + ilay = self.recordarray["ilay"][idx] + nstrt = self.recordarray["ncol"][idx] + nend = self.recordarray["nrow"][idx] + npl = nend - nstrt + 1 + if self.verbose: + print(f"Byte position in file: {ipos} for layer {ilay}") + self.file.seek(ipos, 0) + data[ilay - 1] = binaryread(self.file, self.realtype, shape=(npl,)) + return data + + def get_databytes(self, header): + """ + + Parameters + ---------- + header : datafile.Header + header object + + Returns + ------- + databytes : int + size of the data array, in bytes, following the header + + """ + # unstructured head files contain node starting and ending indices + # for each layer + nstrt = np.int64(header["ncol"]) + nend = np.int64(header["nrow"]) + npl = nend - nstrt + 1 + return npl * np.int64(self.realtype(1).nbytes) + + def get_ts(self, idx): + """ + Get a time series from the binary HeadUFile + + Parameters + ---------- + idx : int or list of ints + idx can be nodenumber or it can be a list in the form + [nodenumber, nodenumber, ...]. The nodenumber, + values must be zero based. + + Returns + ---------- + out : numpy array + Array has size (ntimes, ncells + 1). The first column in the + data array will contain time (totim). + + """ + times = self.get_times() + data = self.get_data(totim=times[0]) + layers = len(data) + ncpl = [len(data[l]) for l in range(layers)] + result = [] + + if isinstance(idx, int): + layer, nn = get_lni(ncpl, [idx])[0] + for i, time in enumerate(times): + data = self.get_data(totim=time) + value = data[layer][nn] + result.append([time, value]) + elif isinstance(idx, list) and all(isinstance(x, int) for x in idx): + for i, time in enumerate(times): + data = self.get_data(totim=time) + row = [time] + lni = get_lni(ncpl, idx) + for layer, nn in lni: + value = data[layer][nn] + row += [value] + result.append(row) + else: + raise ValueError("idx must be an integer or a list of integers") + + return np.array(result) + + +class BudgetIndexError(Exception): + pass + + +class CellBudgetFile: + """ + The CellBudgetFile class provides convenient ways to retrieve and + manipulate budget data from a binary cell budget file. 
A utility + method is also provided to reverse the budget records for particle + tracking simulations in which particles are tracked backwards from + terminating to release locations (e.g., to compute capture zones). + + Parameters + ---------- + filename : str or PathLike + Path of the cell budget file. + precision : string + Precision of floating point budget data in the file. Accepted + values are 'single' or 'double'. Default is 'single'. + verbose : bool + Toggle logging output. Default is False. Examples -------- @@ -796,6 +1024,8 @@ def __init__( if "dis" in kwargs.keys(): self.dis = kwargs.pop("dis") self.modelgrid = self.dis.parent.modelgrid + if "tdis" in kwargs.keys(): + self.tdis = kwargs.pop("tdis") if "sr" in kwargs.keys(): from ..discretization import StructuredGrid, UnstructuredGrid @@ -1969,177 +2199,158 @@ def close(self): Close the file handle """ self.file.close() - return - -class HeadUFile(BinaryLayerFile): - """ - Unstructured MODFLOW-USG HeadUFile Class. - - Parameters - ---------- - filename : string - Name of the concentration file - text : string - Name of the text string in the head file. Default is 'headu' - precision : string - 'auto', 'single' or 'double'. Default is 'auto'. - verbose : bool - Write information to the screen. Default is False. - - Attributes - ---------- - - Methods - ------- - - See Also - -------- - - Notes - ----- - The HeadUFile class provides simple ways to retrieve a list of - head arrays from a MODFLOW-USG binary head file and time series - arrays for one or more cells. - - The BinaryLayerFile class is built on a record array consisting of - headers, which are record arrays of the modflow header information - (kstp, kper, pertim, totim, text, nrow, ncol, ilay) - and long integers, which are pointers to first bytes of data for - the corresponding data array. For unstructured grids, nrow and ncol - are the starting and ending node numbers for layer, ilay. This class - overrides methods in the parent class so that the proper sized arrays - are created. - - When the get_data method is called for this class, a list of - one-dimensional arrays will be returned, where each array is the head - array for a layer. If the heads for a layer were not saved, then - None will be returned for that layer. - - Examples - -------- - - >>> import flopy.utils.binaryfile as bf - >>> hdobj = bf.HeadUFile('model.hds') - >>> hdobj.list_records() - >>> usgheads = hdobj.get_data(kstpkper=(1, 50)) - - - """ - - def __init__( - self, - filename: Union[str, os.PathLike], - text="headu", - precision="auto", - verbose=False, - **kwargs, - ): - """ - Class constructor - """ - self.text = text.encode() - if precision == "auto": - precision = get_headfile_precision(filename) - if precision == "unknown": - s = f"Error. Precision could not be determined for {filename}" - print(s) - raise Exception() - self.header_dtype = BinaryHeader.set_dtype( - bintype="Head", precision=precision - ) - super().__init__(filename, precision, verbose, kwargs) - - def _get_data_array(self, totim=0.0): - """ - Get a list of 1D arrays for the - specified kstp and kper value or totim value. - - """ - - if totim >= 0.0: - keyindices = np.where(self.recordarray["totim"] == totim)[0] - if len(keyindices) == 0: - msg = f"totim value ({totim}) not found in file..." 
- raise Exception(msg) - else: - raise Exception("Data not found...") - - # fill a list of 1d arrays with heads from binary file - data = self.nlay * [None] - for idx in keyindices: - ipos = self.iposarray[idx] - ilay = self.recordarray["ilay"][idx] - nstrt = self.recordarray["ncol"][idx] - nend = self.recordarray["nrow"][idx] - npl = nend - nstrt + 1 - if self.verbose: - print(f"Byte position in file: {ipos} for layer {ilay}") - self.file.seek(ipos, 0) - data[ilay - 1] = binaryread(self.file, self.realtype, shape=(npl,)) - return data - - def get_databytes(self, header): + def reverse(self, filename: Optional[os.PathLike] = None): """ + Write a binary cell budget file with the records in reverse order. + If a new filename is not provided, or if the filename is the same + as the existing filename, the file will be overwritten and data + reloaded from the rewritten/reversed file. Parameters ---------- - header : datafile.Header - header object - - Returns - ------- - databytes : int - size of the data array, in bytes, following the header - - """ - # unstructured head files contain node starting and ending indices - # for each layer - nstrt = np.int64(header["ncol"]) - nend = np.int64(header["nrow"]) - npl = nend - nstrt + 1 - return npl * np.int64(self.realtype(1).nbytes) - def get_ts(self, idx): + filename : str or PathLike, optional + Path of the new reversed binary cell budget file to create. """ - Get a time series from the binary HeadUFile - Parameters - ---------- - idx : int or list of ints - idx can be nodenumber or it can be a list in the form - [nodenumber, nodenumber, ...]. The nodenumber, - values must be zero based. + filename = ( + Path(filename).expanduser().absolute() + if filename + else self.filename + ) - Returns - ---------- - out : numpy array - Array has size (ntimes, ncells + 1). The first column in the - data array will contain time (totim). 
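+        # note: dt1 below is the main budget record header; dt2 holds the
+        # four auxiliary text fields written only for imeth == 6 records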
+ # header array formats + dt1 = np.dtype( + [ + ("kstp", np.int32), + ("kper", np.int32), + ("text", "S16"), + ("ndim1", np.int32), + ("ndim2", np.int32), + ("ndim3", np.int32), + ("imeth", np.int32), + ("delt", np.float64), + ("pertim", np.float64), + ("totim", np.float64), + ] + ) + dt2 = np.dtype( + [ + ("text1id1", "S16"), + ("text1id2", "S16"), + ("text2id1", "S16"), + ("text2id2", "S16"), + ] + ) - """ - times = self.get_times() - data = self.get_data(totim=times[0]) - layers = len(data) - ncpl = [len(data[l]) for l in range(layers)] - result = [] + # make sure we have tdis + if self.tdis is None or not any(self.tdis.perioddata.get_data()): + raise ValueError( + "tdis must be known to reverse a cell budget file" + ) - if isinstance(idx, int): - layer, nn = get_lni(ncpl, [idx])[0] - for i, time in enumerate(times): - data = self.get_data(totim=time) - value = data[layer][nn] - result.append([time, value]) - elif isinstance(idx, list) and all(isinstance(x, int) for x in idx): - for i, time in enumerate(times): - data = self.get_data(totim=time) - row = [time] - lni = get_lni(ncpl, idx) - for layer, nn in lni: - value = data[layer][nn] - row += [value] - result.append(row) - else: - raise ValueError("idx must be an integer or a list of integers") + # extract perioddata + pd = self.tdis.perioddata.get_data() + + # get maximum period number and total simulation time + nper = len(pd) + kpermx = nper - 1 + tsimtotal = 0.0 + for tpd in pd: + tsimtotal += tpd[0] + + # get number of records + nrecords = self.get_nrecords() + + # open backward budget file + with open(filename, "wb") as fbin: + # loop over budget file records in reverse order + for idx in range(nrecords - 1, -1, -1): + # load header array + header = self.recordarray[idx] + + # reverse kstp and kper in the header array + (kstp, kper) = (header["kstp"] - 1, header["kper"] - 1) + kstpmx = pd[kper][1] - 1 + kstpb = kstpmx - kstp + kperb = kpermx - kper + (header["kstp"], header["kper"]) = (kstpb + 1, kperb + 1) + + # reverse totim and pertim in the header array + header["totim"] = tsimtotal - header["totim"] + perlen = pd[kper][0] + header["pertim"] = perlen - header["pertim"] + + # Write main header information to backward budget file + h = header[ + [ + "kstp", + "kper", + "text", + "ncol", + "nrow", + "nlay", + "imeth", + "delt", + "pertim", + "totim", + ] + ] + # Note: much of the code below is based on binary_file_writer.py + h = np.array(h, dtype=dt1) + h.tofile(fbin) + if header["imeth"] == 6: + # Write additional header information to the backward budget file + h = header[ + [ + "modelnam", + "paknam", + "modelnam2", + "paknam2", + ] + ] + h = np.array(h, dtype=dt2) + h.tofile(fbin) + # Load data + data = self.get_data(idx)[0] + data = np.array(data) + # Negate flows + data["q"] = -data["q"] + # Write ndat (number of floating point columns) + colnames = data.dtype.names + ndat = len(colnames) - 2 + dt = np.dtype([("ndat", np.int32)]) + h = np.array([(ndat,)], dtype=dt) + h.tofile(fbin) + # Write auxiliary column names + naux = ndat - 1 + if naux > 0: + auxtxt = [ + "{:16}".format(colname) for colname in colnames[3:] + ] + auxtxt = tuple(auxtxt) + dt = np.dtype( + [(colname, "S16") for colname in colnames[3:]] + ) + h = np.array(auxtxt, dtype=dt) + h.tofile(fbin) + # Write nlist + nlist = data.shape[0] + dt = np.dtype([("nlist", np.int32)]) + h = np.array([(nlist,)], dtype=dt) + h.tofile(fbin) + elif header["imeth"] == 1: + # Load data + data = self.get_data(idx)[0][0][0] + data = np.array(data, dtype=np.float64) + # Negate 
flows + data = -data + else: + raise ValueError("not expecting imeth " + header["imeth"]) + # Write data + data.tofile(fbin) - return np.array(result) + # if we rewrote the original file, reinitialize + if filename == self.filename: + self.__init__(self.filename, self.precision, self.verbose) diff --git a/flopy/utils/datafile.py b/flopy/utils/datafile.py index 87fb4d9d98..81b84901cc 100644 --- a/flopy/utils/datafile.py +++ b/flopy/utils/datafile.py @@ -196,6 +196,8 @@ def __init__( if "dis" in kwargs.keys(): self.dis = kwargs.pop("dis") self.mg = self.dis.parent.modelgrid + if "tdis" in kwargs.keys(): + self.tdis = kwargs.pop("tdis") if "modelgrid" in kwargs.keys(): self.mg = kwargs.pop("modelgrid") if len(kwargs.keys()) > 0: @@ -422,6 +424,11 @@ def list_records(self): print(header) return + def get_nrecords(self): + if isinstance(self.recordarray, np.recarray): + return self.recordarray.shape[0] + return 0 + def _get_data_array(self, totim=0): """ Get the three dimensional data array for the From 8c3d7dbbaa753893d746965f9ecc48dc68b274cd Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Tue, 1 Aug 2023 11:13:12 -0400 Subject: [PATCH 017/101] refactor: require pandas>=2.0.0 as core dependency (#1887) --- .docs/Notebooks/sfrpackage_example.py | 1 - autotest/test_export.py | 10 ++++----- autotest/test_grid.py | 3 +-- autotest/test_hydmodfile.py | 32 ++++++++++----------------- autotest/test_listbudget.py | 13 +++-------- autotest/test_mnw.py | 3 +-- autotest/test_mp5.py | 3 +-- autotest/test_mp6.py | 14 ++++-------- autotest/test_mp7.py | 3 --- autotest/test_plot.py | 2 -- autotest/test_sfr.py | 23 +++++++++---------- autotest/test_str.py | 1 - autotest/test_swr_binaryread.py | 28 ++++++++++------------- autotest/test_util_2d_and_3d.py | 5 +---- autotest/test_zonbud_utility.py | 8 +------ docs/flopy_method_dependencies.md | 11 --------- flopy/export/metadata.py | 2 +- flopy/mf6/utils/binaryfile_utils.py | 6 +---- flopy/mf6/utils/mfobservation.py | 5 +---- flopy/modflow/mfsfr2.py | 4 +--- flopy/modpath/mp6sim.py | 11 ++++----- flopy/utils/flopy_io.py | 6 ++--- flopy/utils/mflistfile.py | 6 +---- flopy/utils/mtlistfile.py | 11 +-------- flopy/utils/observationfile.py | 6 +---- flopy/utils/sfroutputfile.py | 12 ++++------ flopy/utils/util_list.py | 9 +------- flopy/utils/utl_import.py | 1 - flopy/utils/zonbud.py | 9 +------- pyproject.toml | 2 +- 30 files changed, 71 insertions(+), 179 deletions(-) diff --git a/.docs/Notebooks/sfrpackage_example.py b/.docs/Notebooks/sfrpackage_example.py index 292f7d6fec..7aa328a13d 100644 --- a/.docs/Notebooks/sfrpackage_example.py +++ b/.docs/Notebooks/sfrpackage_example.py @@ -204,7 +204,6 @@ raise ValueError("Failed to run.") # ### Load SFR formated water balance output into pandas dataframe using the `SfrFile` class -# * requires the **pandas** library sfr_outfile = os.path.join( "..", "..", "examples", "data", "sfr_examples", "test1ss.flw" diff --git a/autotest/test_export.py b/autotest/test_export.py index a21c060c11..9f4eac4d32 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -160,7 +160,7 @@ def test_output_helper_shapefile_export( ) -@requires_pkg("pandas", "shapefile") +@requires_pkg("shapefile") @pytest.mark.slow def test_freyberg_export(function_tmpdir, example_data_path): # steady state @@ -254,7 +254,7 @@ def test_freyberg_export(function_tmpdir, example_data_path): assert part.read_text() == wkt -@requires_pkg("pandas", "shapefile") +@requires_pkg("shapefile") @pytest.mark.parametrize("missing_arrays", [True, False]) 
@pytest.mark.slow def test_disu_export(function_tmpdir, missing_arrays): @@ -485,7 +485,7 @@ def test_shapefile_ibound(function_tmpdir, example_data_path): shape.close() -@requires_pkg("pandas", "shapefile") +@requires_pkg("shapefile") @pytest.mark.slow @pytest.mark.parametrize("namfile", namfiles()) def test_shapefile(function_tmpdir, namfile): @@ -510,7 +510,7 @@ def test_shapefile(function_tmpdir, namfile): ), f"wrong number of records in shapefile {fnc_name}" -@requires_pkg("pandas", "shapefile") +@requires_pkg("shapefile") @pytest.mark.slow @pytest.mark.parametrize("namfile", namfiles()) def test_shapefile_export_modelgrid_override(function_tmpdir, namfile): @@ -1446,7 +1446,7 @@ def test_vtk_vertex(function_tmpdir, example_data_path): @requires_exe("mf2005") -@requires_pkg("pandas", "vtk") +@requires_pkg("vtk") def test_vtk_pathline(function_tmpdir, example_data_path): from vtkmodules.vtkIOLegacy import vtkUnstructuredGridReader diff --git a/autotest/test_grid.py b/autotest/test_grid.py index 246f9e48b9..c3dafe5557 100644 --- a/autotest/test_grid.py +++ b/autotest/test_grid.py @@ -1,5 +1,5 @@ -import re import os +import re import warnings from warnings import warn @@ -22,7 +22,6 @@ from flopy.utils.triangle import Triangle from flopy.utils.voronoi import VoronoiGrid - HAS_PYPROJ = has_pkg("pyproj") if HAS_PYPROJ: import pyproj diff --git a/autotest/test_hydmodfile.py b/autotest/test_hydmodfile.py index 8e459a8241..76b70f1bb2 100644 --- a/autotest/test_hydmodfile.py +++ b/autotest/test_hydmodfile.py @@ -1,6 +1,7 @@ import os import numpy as np +import pandas as pd import pytest from modflow_devtools.markers import requires_pkg from modflow_devtools.misc import has_pkg @@ -118,31 +119,22 @@ def test_hydmodfile_read(hydmod_model_path): len(data.dtype.names) == nitems + 1 ), f"data column length is not {len(nitems + 1)}" - if has_pkg("pandas"): - import pandas as pd - - for idx in range(ntimes): - df = h.get_dataframe(idx=idx, timeunit="S") - assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (1, 9), "data shape is not (1, 9)" - - for time in times: - df = h.get_dataframe(totim=time, timeunit="S") - assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (1, 9), "data shape is not (1, 9)" + for idx in range(ntimes): + df = h.get_dataframe(idx=idx, timeunit="S") + assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" + assert df.shape == (1, 9), "data shape is not (1, 9)" - df = h.get_dataframe(timeunit="S") + for time in times: + df = h.get_dataframe(totim=time, timeunit="S") assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (101, 9), "data shape is not (101, 9)" - else: - print("pandas not available...") - pass + assert df.shape == (1, 9), "data shape is not (1, 9)" + df = h.get_dataframe(timeunit="S") + assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" + assert df.shape == (101, 9), "data shape is not (101, 9)" -@requires_pkg("pandas") -def test_mf6obsfile_read(mf6_obs_model_path): - import pandas as pd +def test_mf6obsfile_read(mf6_obs_model_path): txt = "binary mf6 obs" files = ["maw_obs.gitbin", "maw_obs.gitcsv"] binfile = [True, False] diff --git a/autotest/test_listbudget.py b/autotest/test_listbudget.py index ffc4a094a3..7f39e8d089 100644 --- a/autotest/test_listbudget.py +++ b/autotest/test_listbudget.py @@ -2,6 +2,7 @@ import warnings import numpy as np +import pandas as pd import pytest from modflow_devtools.markers import requires_pkg 
from modflow_devtools.misc import has_pkg @@ -54,14 +55,9 @@ def test_mflistfile(example_data_path): cum = mflist.get_cumulative(names="PERCENT_DISCREPANCY") assert isinstance(cum, np.ndarray) - if not has_pkg("pandas"): - return - - import pandas - df_flx, df_vol = mflist.get_dataframes(start_datetime=None) - assert isinstance(df_flx, pandas.DataFrame) - assert isinstance(df_vol, pandas.DataFrame) + assert isinstance(df_flx, pd.DataFrame) + assert isinstance(df_vol, pd.DataFrame) # test get runtime runtime = mflist.get_model_runtime(units="hours") @@ -115,10 +111,7 @@ def test_mflist_reducedpumping_fail(example_data_path): mflist.get_reduced_pumping() -@requires_pkg("pandas") def test_mtlist(example_data_path): - import pandas as pd - mt_dir = example_data_path / "mt3d_test" mt = MtListBudget(mt_dir / "mcomp.list") df_gw, df_sw = mt.parse(forgive=False, diff=False, start_datetime=None) diff --git a/autotest/test_mnw.py b/autotest/test_mnw.py index d2b4c0ea04..266c55a14b 100644 --- a/autotest/test_mnw.py +++ b/autotest/test_mnw.py @@ -2,6 +2,7 @@ import shutil import numpy as np +import pandas as pd import pytest from modflow_devtools.markers import requires_pkg @@ -293,13 +294,11 @@ def test_make_package(function_tmpdir): ) -@requires_pkg("pandas") def test_mnw2_create_file(function_tmpdir): """ Test for issue #556, Mnw2 crashed if wells have multiple node lengths """ - import pandas as pd mf = Modflow("test_mfmnw2", exe_name="mf2005") ws = function_tmpdir diff --git a/autotest/test_mp5.py b/autotest/test_mp5.py index 9101d0809d..7c4515444b 100644 --- a/autotest/test_mp5.py +++ b/autotest/test_mp5.py @@ -1,6 +1,7 @@ import os import numpy as np +import pandas as pd from autotest.test_mp6 import eval_timeseries from matplotlib import pyplot as plt from modflow_devtools.markers import requires_pkg @@ -10,7 +11,6 @@ from flopy.utils import EndpointFile, PathlineFile -@requires_pkg("pandas") def test_mp5_load(function_tmpdir, example_data_path): # load the base freyberg model freyberg_ws = example_data_path / "freyberg" @@ -68,7 +68,6 @@ def test_mp5_load(function_tmpdir, example_data_path): plt.close() -@requires_pkg("pandas") def test_mp5_timeseries_load(example_data_path): pth = str(example_data_path / "mp5") files = [ diff --git a/autotest/test_mp6.py b/autotest/test_mp6.py index 4637876e01..6f37de0369 100644 --- a/autotest/test_mp6.py +++ b/autotest/test_mp6.py @@ -3,6 +3,7 @@ import matplotlib.pyplot as plt import numpy as np +import pandas as pd import pytest from autotest.conftest import get_example_data_path from autotest.test_mp6_cases import Mp6Cases1, Mp6Cases2 @@ -110,12 +111,8 @@ def test_mpsim(function_tmpdir, mp6_test_path): ) mp.write_input() - use_pandas_combs = [False] # test StartingLocationsFile._write_wo_pandas - if has_pkg("pandas"): - # test StartingLocationsFile._write_particle_data_with_pandas - use_pandas_combs.append(True) - - for use_pandas in use_pandas_combs: + # test StartingLocationsFile._write_wo_pandas + for use_pandas in [True, False]: sim = Modpath6Sim(model=mp) # starting locations file stl = StartingLocationsFile(model=mp, use_pandas=use_pandas) @@ -135,7 +132,7 @@ def test_mpsim(function_tmpdir, mp6_test_path): assert stllines[6].strip().split()[-1] == "p2" -@requires_pkg("pandas", "shapefile", "shapely") +@requires_pkg("shapefile", "shapely") def test_get_destination_data(function_tmpdir, mp6_test_path): copy_modpath_files(mp6_test_path, function_tmpdir, "EXAMPLE.") copy_modpath_files(mp6_test_path, function_tmpdir, "EXAMPLE-3.") @@ -307,7 +304,6 @@ 
def test_get_destination_data(function_tmpdir, mp6_test_path): pthobj.write_shapefile(shpname=fpth, direction="ending", mg=mg4) -@requires_pkg("pandas") def test_loadtxt(function_tmpdir, mp6_test_path): copy_modpath_files(mp6_test_path, function_tmpdir, "EXAMPLE-3.") @@ -324,7 +320,6 @@ def test_loadtxt(function_tmpdir, mp6_test_path): @requires_exe("mf2005") -@requires_pkg("pandas") def test_modpath(function_tmpdir, example_data_path): pth = example_data_path / "freyberg" mfnam = "freyberg.nam" @@ -480,7 +475,6 @@ def test_modpath(function_tmpdir, example_data_path): plt.close() -@requires_pkg("pandas") def test_mp6_timeseries_load(example_data_path): pth = example_data_path / "mp5" files = [ diff --git a/autotest/test_mp7.py b/autotest/test_mp7.py index b51edf624e..540b6b9c25 100644 --- a/autotest/test_mp7.py +++ b/autotest/test_mp7.py @@ -254,7 +254,6 @@ def test_default_modpath(ex01b_mf6_model): @requires_exe("mf6", "mp7") -@requires_pkg("pandas") def test_faceparticles_is1(ex01b_mf6_model): sim, function_tmpdir = ex01b_mf6_model @@ -441,7 +440,6 @@ def test_facenode_is2a(ex01b_mf6_model): @requires_exe("mf6", "mp7") -@requires_pkg("pandas") def test_cellparticles_is1(ex01b_mf6_model): sim, function_tmpdir = ex01b_mf6_model grid = sim.get_model(ex01b_mf6_model_name).modelgrid @@ -817,7 +815,6 @@ def test_pathline_output(function_tmpdir): assert maxid0 == maxid1, msg -@requires_pkg("pandas") @requires_exe("mf2005", "mf6", "mp7") def test_endpoint_output(function_tmpdir): case_mf2005 = Mp7Cases.mp7_mf2005(function_tmpdir) diff --git a/autotest/test_plot.py b/autotest/test_plot.py index 085eabd693..04f320721a 100644 --- a/autotest/test_plot.py +++ b/autotest/test_plot.py @@ -386,7 +386,6 @@ def modpath_model(function_tmpdir, example_data_path): return ml, mp, sim -@requires_pkg("pandas") @requires_exe("mf2005", "mp6") def test_xc_plot_particle_pathlines(modpath_model): ml, mp, sim = modpath_model @@ -407,7 +406,6 @@ def test_xc_plot_particle_pathlines(modpath_model): assert len(pth._paths) == 6 -@requires_pkg("pandas") @requires_exe("mf2005", "mp6") def test_map_plot_particle_endpoints(modpath_model): ml, mp, sim = modpath_model diff --git a/autotest/test_sfr.py b/autotest/test_sfr.py index 19b09d0fb2..42630e1880 100644 --- a/autotest/test_sfr.py +++ b/autotest/test_sfr.py @@ -373,7 +373,7 @@ def test_const(sfr_data): assert True -@requires_pkg("pandas", "shapefile", "shapely") +@requires_pkg("shapefile", "shapely") def test_export(function_tmpdir, sfr_data): m = Modflow() dis = ModflowDis(m, 1, 10, 10, lenuni=2, itmuni=4) @@ -666,7 +666,6 @@ def test_assign_layers(function_tmpdir): @requires_exe("mf2005") -@requires_pkg("pandas") def test_SfrFile(function_tmpdir, sfr_examples_path, mf2005_model_path): common_names = [ "layer", @@ -693,13 +692,12 @@ def test_SfrFile(function_tmpdir, sfr_examples_path, mf2005_model_path): "gw_head", ], sfrout.names assert sfrout.times == [(0, 0), (49, 1)], sfrout.times - # will be None if pandas is not installed - if sfrout.pd is not None: - df = sfrout.get_dataframe() - assert df.layer.values[0] == 1 - assert df.column.values[0] == 169 - assert df.Cond.values[0] == 74510.0 - assert df.gw_head.values[3] == 1.288e03 + + df = sfrout.get_dataframe() + assert df.layer.values[0] == 1 + assert df.column.values[0] == 169 + assert df.Cond.values[0] == 74510.0 + assert df.gw_head.values[3] == 1.288e03 sfrout = SfrFile(sfr_examples_path / "test1tr.flw") assert sfrout.ncol == 16, sfrout.ncol @@ -737,10 +735,9 @@ def test_SfrFile(function_tmpdir, sfr_examples_path, 
mf2005_model_path): (49, 1), ] assert sfrout.times == expected_times, sfrout.times - if sfrout.pd is not None: - df = sfrout.get_dataframe() - assert df.gradient.values[-1] == 5.502e-02 - assert df.shape == (1080, 20) + df = sfrout.get_dataframe() + assert df.gradient.values[-1] == 5.502e-02 + assert df.shape == (1080, 20) ml = Modflow.load( "test1tr.nam", model_ws=mf2005_model_path, exe_name="mf2005" diff --git a/autotest/test_str.py b/autotest/test_str.py index b54149137a..85c23333f2 100644 --- a/autotest/test_str.py +++ b/autotest/test_str.py @@ -14,7 +14,6 @@ @requires_exe("mf2005") -@requires_pkg("pandas") def test_str_issue1164(function_tmpdir, example_data_path): mf2005_model_path = example_data_path / "mf2005_test" m = Modflow.load( diff --git a/autotest/test_swr_binaryread.py b/autotest/test_swr_binaryread.py index 228127a12b..02bf20a96b 100644 --- a/autotest/test_swr_binaryread.py +++ b/autotest/test_swr_binaryread.py @@ -1,4 +1,5 @@ # Test SWR binary read functionality +import pandas as pd import pytest from modflow_devtools.misc import has_pkg @@ -446,21 +447,16 @@ def test_swr_binary_obs(swr_test_path, ipos): ), "SwrObs data does not have nobs + 1" # test get_dataframes() - if has_pkg("pandas"): - import pandas as pd - - for idx in range(ntimes): - df = sobj.get_dataframe(idx=idx, timeunit="S") - assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (1, nobs + 1), "data shape is not (1, 10)" - - for time in times: - df = sobj.get_dataframe(totim=time, timeunit="S") - assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (1, nobs + 1), "data shape is not (1, 10)" + for idx in range(ntimes): + df = sobj.get_dataframe(idx=idx, timeunit="S") + assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" + assert df.shape == (1, nobs + 1), "data shape is not (1, 10)" - df = sobj.get_dataframe(timeunit="S") + for time in times: + df = sobj.get_dataframe(totim=time, timeunit="S") assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" - assert df.shape == (336, nobs + 1), "data shape is not (336, 10)" - else: - print("pandas not available...") + assert df.shape == (1, nobs + 1), "data shape is not (1, 10)" + + df = sobj.get_dataframe(timeunit="S") + assert isinstance(df, pd.DataFrame), "A DataFrame was not returned" + assert df.shape == (336, nobs + 1), "data shape is not (336, 10)" diff --git a/autotest/test_util_2d_and_3d.py b/autotest/test_util_2d_and_3d.py index ea262202fb..2080458033 100644 --- a/autotest/test_util_2d_and_3d.py +++ b/autotest/test_util_2d_and_3d.py @@ -1,6 +1,7 @@ import os import numpy as np +import pandas as pd import pytest from modflow_devtools.markers import requires_pkg @@ -439,7 +440,6 @@ def test_append_mflist(function_tmpdir): ml.write_input() -@requires_pkg("pandas") def test_mflist(function_tmpdir, example_data_path): model = Modflow(model_ws=function_tmpdir) dis = ModflowDis(model, 10, 10, 10, 10) @@ -609,13 +609,10 @@ def test_util3d_reset(): ml.bas6.strt = arr -@requires_pkg("pandas") def test_mflist_fromfile(function_tmpdir): """test that when a file is passed to stress period data, the .array attribute will load the file """ - import pandas as pd - wel_data = pd.DataFrame( [(0, 1, 2, -50.0), (0, 5, 5, -50.0)], columns=["k", "i", "j", "flux"] ) diff --git a/autotest/test_zonbud_utility.py b/autotest/test_zonbud_utility.py index 57b3a7d49f..4dd16c70c6 100644 --- a/autotest/test_zonbud_utility.py +++ b/autotest/test_zonbud_utility.py @@ -1,6 +1,7 @@ import os 
import numpy as np +import pandas as pd import pytest from modflow_devtools.markers import requires_exe, requires_pkg @@ -207,7 +208,6 @@ def test_zonbud_readwrite_zbarray(function_tmpdir): assert np.array_equal(x, z), "Input and output arrays do not match." -@requires_pkg("pandas") def test_dataframes(cbc_f, zon_f): zon = ZoneBudget.read_zone_file(zon_f) cmd = ZoneBudget(cbc_f, zon, totim=1095.0) @@ -233,11 +233,8 @@ def test_get_model_shape(cbc_f, zon_f): ).get_model_shape() -@requires_pkg("pandas") @pytest.mark.parametrize("rtol", [1e-2]) def test_zonbud_active_areas_zone_zero(loadpth, cbc_f, rtol): - import pandas as pd - # Read ZoneBudget executable output and reformat zbud_f = loadpth / "zonef_mlt_active_zone_0.2.csv" zbud = pd.read_csv(zbud_f) @@ -284,10 +281,7 @@ def test_read_zone_file(function_tmpdir): @pytest.mark.mf6 @requires_exe("mf6") -@requires_pkg("pandas") def test_zonebudget_6(function_tmpdir, example_data_path): - import pandas as pd - exe_name = "mf6" zb_exe_name = "zbud6" diff --git a/docs/flopy_method_dependencies.md b/docs/flopy_method_dependencies.md index 28ddaa89f3..72841add05 100644 --- a/docs/flopy_method_dependencies.md +++ b/docs/flopy_method_dependencies.md @@ -11,17 +11,6 @@ Additional dependencies to use optional FloPy helper methods are listed below. | `.interpolate()` in `flopy.utils.reference` `SpatialReference` class | **scipy.interpolate** | | `.interpolate()` in `flopy.mf6.utils.reference` `StructuredSpatialReference` class | **scipy.interpolate** | | `._parse_units_from_proj4()` in `flopy.utils.reference` `SpatialReference` class | **pyproj** | -| `.get_dataframes()` in `flopy.utils.mflistfile` `ListBudget` class | **pandas** >= 0.15.0 | -| `.get_dataframes()` in `flopy.utils.observationfile` `ObsFiles` class | **pandas** >= 0.15.0 | -| `.get_dataframes()` in `flopy.utils.sfroutputfile` `ModflowSfr2` class | **pandas** >= 0.15.0 | -| `.get_dataframes()` in `flopy.utils.util_list` `MfList` class | **pandas** >= 0.15.0 | -| `.get_dataframes()` in `flopy.utils.zonebud` `ZoneBudget` class | **pandas** >= 0.15.0 | -| `.pivot_keyarray()` in `flopy.mf6.utils.arrayutils` `AdvancedPackageUtil` class | **pandas** >= 0.15.0 | -| `._get_vertices()` in `flopy.mf6.utils.binaryfile_utils` `MFOutputRequester` class | **pandas** >= 0.15.0 | -| `.get_dataframe()` in `flopy.mf6.utils.mfobservation` `Observations` class | **pandas** >= 0.15.0 | -| `.df()` in `flopy.modflow.mfsfr2` `SfrFile` class | **pandas** >= 0.15.0 | -| `.time_coverage()` in `flopy.export.metadata` `acc` class - ***used if available*** | **pandas** >= 0.15.0 | -| `.loadtxt()` in `flopy.utils.flopyio` - ***used if available*** | **pandas** >= 0.15.0 | | `.generate_classes()` in `flopy.mf6.utils` | [**modflow-devtools**](https://github.com/MODFLOW-USGS/modflow-devtools) | | `GridIntersect()` in `flopy.utils.gridintersect` | **shapely** | | `GridIntersect().plot_polygon()` in `flopy.utils.gridintersect` | **shapely** and **descartes** | diff --git a/flopy/export/metadata.py b/flopy/export/metadata.py index d3325864fd..f7bc326857 100644 --- a/flopy/export/metadata.py +++ b/flopy/export/metadata.py @@ -1,4 +1,5 @@ import numpy as np +import pandas as pd from ..utils import import_optional_dependency from ..utils.flopy_io import get_url_text @@ -191,7 +192,6 @@ def time_coverage(self): ------- """ - pd = import_optional_dependency("pandas", errors="ignore") l = self.sb["dates"] tc = {} diff --git a/flopy/mf6/utils/binaryfile_utils.py b/flopy/mf6/utils/binaryfile_utils.py index 7ed85ed080..24fc9b8512 
100644 --- a/flopy/mf6/utils/binaryfile_utils.py +++ b/flopy/mf6/utils/binaryfile_utils.py @@ -1,6 +1,7 @@ import os import numpy as np +import pandas as pd from ...utils import binaryfile as bf from ...utils import import_optional_dependency @@ -231,11 +232,6 @@ def _get_vertices(mfdict, key): elevations corresponding to a row column location """ - pd = import_optional_dependency( - "pandas", - error_message="MFOutputRequester._get_vertices() requires pandas.", - ) - mname = key[0] cellid = mfdict[(mname, "DISV8", "CELL2D", "cell2d_num")] diff --git a/flopy/mf6/utils/mfobservation.py b/flopy/mf6/utils/mfobservation.py index ab3fe30c3f..09a4c57081 100644 --- a/flopy/mf6/utils/mfobservation.py +++ b/flopy/mf6/utils/mfobservation.py @@ -1,6 +1,7 @@ import csv import numpy as np +import pandas as pd from ...utils import import_optional_dependency @@ -210,10 +211,6 @@ def get_dataframe( pd.DataFrame """ - pd = import_optional_dependency( - "pandas", - error_message="get_dataframe() requires pandas.", - ) data_str = self._reader(self.Obsname) data = self._array_to_dict(data_str) diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index cf8355c01d..c2a1f43400 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -5,6 +5,7 @@ import warnings import numpy as np +import pandas as pd from numpy.lib import recfunctions from ..pakbase import Package @@ -646,7 +647,6 @@ def paths(self): @property def df(self): - pd = import_optional_dependency("pandas") return pd.DataFrame(self.reach_data) def _make_graph(self): @@ -1573,8 +1573,6 @@ def plot_path(self, start_seg=None, end_seg=0, plot_segment_lines=True): """ import matplotlib.pyplot as plt - pd = import_optional_dependency("pandas") - df = self.df m = self.parent mfunits = m.modelgrid.units diff --git a/flopy/modpath/mp6sim.py b/flopy/modpath/mp6sim.py index bf96a56cd1..1900b40c6d 100644 --- a/flopy/modpath/mp6sim.py +++ b/flopy/modpath/mp6sim.py @@ -8,6 +8,7 @@ """ import numpy as np +import pandas as pd from ..pakbase import Package from ..utils import Util3d, import_optional_dependency @@ -411,8 +412,8 @@ class StartingLocationsFile(Package): Input style described in MODPATH6 manual (currently only input style 1 is supported) extension : string Filename extension (default is 'loc') - use_pandas: bool, default False - If True and pandas is available use pandas to write the particle locations >2x speed + use_pandas: bool, default True + If True use pandas to write the particle locations >2x speed """ def __init__( @@ -421,7 +422,7 @@ def __init__( inputstyle=1, extension="loc", verbose=False, - use_pandas=False, + use_pandas=True, ): super().__init__(model, extension, "LOC", 33) @@ -510,10 +511,6 @@ def _write_particle_data_with_pandas(self, data, float_format): :param save_group_mapper bool, if true, save a groupnumber to group name mapper as well. 
:return: """ - pd = import_optional_dependency( - "pandas", - error_message="specify `use_pandas=False` to use slower methods without pandas", - ) # convert float format string to pandas float format float_format = ( float_format.replace("{", "").replace("}", "").replace(":", "%") diff --git a/flopy/utils/flopy_io.py b/flopy/utils/flopy_io.py index cfd3190d67..17078dabb7 100644 --- a/flopy/utils/flopy_io.py +++ b/flopy/utils/flopy_io.py @@ -9,6 +9,7 @@ from typing import Union import numpy as np +import pandas as pd def _fmt_string(array, float_format="{}"): @@ -326,7 +327,7 @@ def loadtxt( file, delimiter=" ", dtype=None, skiprows=0, use_pandas=True, **kwargs ): """ - Use pandas if it is available to load a text file + Use pandas to load a text file (significantly faster than n.loadtxt or genfromtxt see https://stackoverflow.com/q/18259393/) @@ -352,15 +353,12 @@ def loadtxt( """ from ..utils import import_optional_dependency - # test if pandas should be used, if available if use_pandas: - pd = import_optional_dependency("pandas") if delimiter.isspace(): kwargs["delim_whitespace"] = True if isinstance(dtype, np.dtype) and "names" not in kwargs: kwargs["names"] = dtype.names - # if use_pandas and pd then use pandas if use_pandas: df = pd.read_csv(file, dtype=dtype, skiprows=skiprows, **kwargs) return df.to_records(index=False) diff --git a/flopy/utils/mflistfile.py b/flopy/utils/mflistfile.py index b88850d078..6ac362cef1 100644 --- a/flopy/utils/mflistfile.py +++ b/flopy/utils/mflistfile.py @@ -10,6 +10,7 @@ import re import numpy as np +import pandas as pd from ..utils import import_optional_dependency from ..utils.flopy_io import get_ts_sp @@ -494,11 +495,6 @@ def get_dataframes(self, start_datetime="1-1-1970", diff=False): """ - pd = import_optional_dependency( - "pandas", - error_message="ListBudget.get_dataframes() requires pandas.", - ) - if not self._isvalid: return None totim = self.get_times() diff --git a/flopy/utils/mtlistfile.py b/flopy/utils/mtlistfile.py index 5214859795..9f70c72b41 100644 --- a/flopy/utils/mtlistfile.py +++ b/flopy/utils/mtlistfile.py @@ -6,6 +6,7 @@ import warnings import numpy as np +import pandas as pd from ..utils import import_optional_dependency @@ -76,11 +77,6 @@ def parse( (optionally) surface-water mass budget. If the SFT process is not used, df_sw is None. 
""" - pd = import_optional_dependency( - "pandas", - error_message="MtListBudget.parse() requires pandas.", - ) - self.gw_data = {} self.sw_data = {} self.lcount = 0 @@ -182,11 +178,6 @@ def parse( return df_gw, df_sw def _diff(self, df): - pd = import_optional_dependency( - "pandas", - error_message="MtListBudget._diff() requires pandas.", - ) - out_cols = [ c for c in df.columns if "_out" in c and not c.startswith("net_") ] diff --git a/flopy/utils/observationfile.py b/flopy/utils/observationfile.py index d385320daf..5f6f37fdfb 100644 --- a/flopy/utils/observationfile.py +++ b/flopy/utils/observationfile.py @@ -1,6 +1,7 @@ import io import numpy as np +import pandas as pd from ..utils import import_optional_dependency from ..utils.flopy_io import get_ts_sp @@ -179,11 +180,6 @@ def get_dataframe( from ..utils.utils_def import totim_to_datetime - pd = import_optional_dependency( - "pandas", - error_message="ObsFiles.get_dataframe() requires pandas.", - ) - i0 = 0 i1 = self.data.shape[0] if totim is not None: diff --git a/flopy/utils/sfroutputfile.py b/flopy/utils/sfroutputfile.py index 5e8d18ea72..797bcfdad6 100644 --- a/flopy/utils/sfroutputfile.py +++ b/flopy/utils/sfroutputfile.py @@ -1,4 +1,5 @@ import numpy as np +import pandas as pd from ..utils import import_optional_dependency @@ -52,8 +53,6 @@ def __init__(self, filename, geometries=None, verbose=False): Class constructor. """ - self.pd = import_optional_dependency("pandas") - # get the number of rows to skip at top, and the number of data columns self.filename = filename evaluated_format = False @@ -172,13 +171,10 @@ def get_dataframe(self): "skiprows": self.sr, "low_memory": False, } - try: # since pandas 1.3.0 - df = self.pd.read_csv(**kwargs, on_bad_lines="skip") - except TypeError: # before pandas 1.3.0 - df = self.pd.read_csv(**kwargs, error_bad_lines=False) + df = pd.read_csv(**kwargs, on_bad_lines="skip") # drop text between stress periods; convert to numeric - df["layer"] = self.pd.to_numeric(df.layer, errors="coerce") + df["layer"] = pd.to_numeric(df.layer, errors="coerce") df.dropna(axis=0, inplace=True) # convert to proper dtypes @@ -247,7 +243,7 @@ def get_results(self, segment, reach): results = self._get_result(segment, reach) except: locsr = list(zip(segment, reach)) - results = self.pd.DataFrame() + results = pd.DataFrame() for s, r in locsr: srresults = self._get_result(s, r) if len(srresults) > 0: diff --git a/flopy/utils/util_list.py b/flopy/utils/util_list.py index 7b3b1b695a..2aa83be270 100644 --- a/flopy/utils/util_list.py +++ b/flopy/utils/util_list.py @@ -11,6 +11,7 @@ import warnings import numpy as np +import pandas as pd from ..datbase import DataInterface, DataListInterface, DataType from ..utils import import_optional_dependency @@ -438,15 +439,7 @@ def get_dataframe(self, squeeze=False): stress periods where at least one cells is different, otherwise it is equal to the number of keys in MfList.data. - Notes - ----- - Requires pandas. 
- """ - pd = import_optional_dependency( - "pandas", - error_message="MfList.get_dataframe() requires pandas.", - ) # make a dataframe of all data for all stress periods names = ["per", "k", "i", "j"] diff --git a/flopy/utils/utl_import.py b/flopy/utils/utl_import.py index 321215c7a5..166102814c 100644 --- a/flopy/utils/utl_import.py +++ b/flopy/utils/utl_import.py @@ -49,7 +49,6 @@ VERSIONS = { "shapefile": "2.0.0", "dateutil": "2.4.0", - "pandas": "0.15.0", } # A mapping from import name to package name (on PyPI) for packages where diff --git a/flopy/utils/zonbud.py b/flopy/utils/zonbud.py index 053b8e3df3..5740669c11 100644 --- a/flopy/utils/zonbud.py +++ b/flopy/utils/zonbud.py @@ -4,6 +4,7 @@ from typing import Union import numpy as np +import pandas as pd from . import import_optional_dependency from .utils_def import totim_to_datetime @@ -2378,10 +2379,6 @@ def _recarray_to_dataframe( pd.DataFrame """ - pd = import_optional_dependency( - "pandas", - error_message="ZoneBudget.get_dataframes() requires pandas.", - ) valid_index_keys = ["totim", "kstpkper"] s = f'index_key "{index_key}" is not valid.' @@ -3000,10 +2997,6 @@ def _volumetric_flux(recarray, modeltime, extrapolate_kper=False): pd.DataFrame """ - pd = import_optional_dependency( - "pandas", - error_message="ZoneBudget._volumetric_flux() requires pandas.", - ) nper = len(modeltime.nstp) volumetric_data = {} diff --git a/pyproject.toml b/pyproject.toml index 8eb30c8ef5..dac35baaed 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -30,6 +30,7 @@ requires-python = ">=3.8" dependencies = [ "numpy >=1.15.0", "matplotlib >=1.4.0", + "pandas >=2.0.0" ] dynamic = ["version", "readme"] @@ -65,7 +66,6 @@ optional = [ "geojson", "imageio", "netcdf4", - "pandas", "pymetis ; platform_system != 'Windows'", "pyproj", "pyshp", From 47e5e359037518f29d14469531db68999d308a0f Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Wed, 2 Aug 2023 04:11:11 +1200 Subject: [PATCH 018/101] refactor(expired deprecation): raise AttributeError with Grid.thick and Grid.saturated_thick (#1884) --- flopy/discretization/grid.py | 44 +++++++----------------------------- 1 file changed, 8 insertions(+), 36 deletions(-) diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py index cfa72af05e..484c67247c 100644 --- a/flopy/discretization/grid.py +++ b/flopy/discretization/grid.py @@ -442,20 +442,11 @@ def cell_thickness(self): @property def thick(self): - """ - DEPRECATED method. thick will be removed in version 3.3.9 - - Get the cell thickness for a structured, vertex, or unstructured grid. - - Returns - ------- - thick : calculated thickness - """ - warnings.warn( - "thick has been replaced with cell_thickness and will be removed in version 3.3.9,", - DeprecationWarning, + """Raises AttributeError, use :meth:`cell_thickness`.""" + # DEPRECATED since version 3.4.0 + raise AttributeError( + "'thick' has been removed; use 'cell_thickness()'" ) - return self.cell_thickness def saturated_thickness(self, array, mask=None): """ @@ -495,30 +486,11 @@ def saturated_thickness(self, array, mask=None): return thickness def saturated_thick(self, array, mask=None): - """ - DEPRECATED method. saturated_thick will be removed in version 3.3.9 - - Get the saturated thickness for a structured, vertex, or unstructured - grid. If the optional array is passed then thickness is returned - relative to array values (saturated thickness). Returned values - ranges from zero to cell thickness if optional array is passed. 
- - Parameters - ---------- - array : ndarray - array of elevations that will be used to adjust the cell thickness - mask: float, list, tuple, ndarray - array values to replace with a nan value. - - Returns - ------- - thick : calculated saturated thickness - """ - warnings.warn( - "saturated_thick has been replaced with saturated_thickness and will be removed in version 3.3.9,", - DeprecationWarning, + """Raises AttributeError, use :meth:`saturated_thickness`.""" + # DEPRECATED since version 3.4.0 + raise AttributeError( + "'saturated_thick' has been removed; use 'saturated_thickness()'" ) - return self.saturated_thickness(array, mask) @property def units(self): From 54d6099eb71a7789121bcfde8816b23fa06c258e Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Tue, 1 Aug 2023 12:31:45 -0400 Subject: [PATCH 019/101] refactor(pathline/endpoint plots): support recarray or dataframe (#1888) * refactor/expand tests * tidy docstrings --- autotest/test_plot.py | 77 ++++++++++++++++++++++++++---- flopy/plot/crosssection.py | 7 +++ flopy/plot/map.py | 96 +++++++++++++++++++++----------------- 3 files changed, 130 insertions(+), 50 deletions(-) diff --git a/autotest/test_plot.py b/autotest/test_plot.py index 04f320721a..58e6726727 100644 --- a/autotest/test_plot.py +++ b/autotest/test_plot.py @@ -1,6 +1,7 @@ import os import numpy as np +import pandas as pd import pytest from flaky import flaky from matplotlib import pyplot as plt @@ -387,9 +388,41 @@ def modpath_model(function_tmpdir, example_data_path): @requires_exe("mf2005", "mp6") -def test_xc_plot_particle_pathlines(modpath_model): +def test_plot_map_view_mp6_plot_pathline(modpath_model): ml, mp, sim = modpath_model + mp.write_input() + mp.run_model(silent=False) + + pthobj = PathlineFile(os.path.join(mp.model_ws, "ex6.mppth")) + well_pathlines = pthobj.get_destination_pathline_data( + dest_cells=[(4, 12, 12)] + ) + def test_plot(pl): + mx = PlotMapView(model=ml) + mx.plot_grid() + mx.plot_bc("WEL", kper=2, color="blue") + pth = mx.plot_pathline(pl, colors="red") + # plt.show() + assert isinstance(pth, LineCollection) + assert len(pth._paths) == 114 + + # support pathlines as list of recarrays + test_plot(well_pathlines) + + # support pathlines as list of dataframes + test_plot([pd.DataFrame(pl) for pl in well_pathlines]) + + # support pathlines as single recarray + test_plot(np.concatenate(well_pathlines)) + + # support pathlines as single dataframe + test_plot(pd.DataFrame(np.concatenate(well_pathlines))) + + +@requires_exe("mf2005", "mp6") +def test_plot_cross_section_mp6_plot_pathline(modpath_model): + ml, mp, sim = modpath_model mp.write_input() mp.run_model(silent=False) @@ -398,16 +431,28 @@ def test_xc_plot_particle_pathlines(modpath_model): dest_cells=[(4, 12, 12)] ) - mx = PlotCrossSection(model=ml, line={"row": 4}) - mx.plot_bc("WEL", kper=2, color="blue") - pth = mx.plot_pathline(well_pathlines, method="cell", colors="red") + def test_plot(pl): + mx = PlotCrossSection(model=ml, line={"row": 4}) + mx.plot_bc("WEL", kper=2, color="blue") + pth = mx.plot_pathline(pl, method="cell", colors="red") + assert isinstance(pth, LineCollection) + assert len(pth._paths) == 6 + + # support pathlines as list of recarrays + test_plot(well_pathlines) + + # support pathlines as list of dataframes + test_plot([pd.DataFrame(pl) for pl in well_pathlines]) + + # support pathlines as single recarray + test_plot(np.concatenate(well_pathlines)) - assert isinstance(pth, LineCollection) - assert len(pth._paths) == 6 + # support pathlines as single dataframe + 
test_plot(pd.DataFrame(np.concatenate(well_pathlines))) @requires_exe("mf2005", "mp6") -def test_map_plot_particle_endpoints(modpath_model): +def test_plot_map_view_mp6_endpoint(modpath_model): ml, mp, sim = modpath_model mp.write_input() mp.run_model(silent=False) @@ -415,7 +460,23 @@ def test_map_plot_particle_endpoints(modpath_model): pthobj = EndpointFile(os.path.join(mp.model_ws, "ex6.mpend")) endpts = pthobj.get_alldata() - # color kwarg as scalar + # support endpoints as recarray + assert isinstance(endpts, np.recarray) + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint(endpts, direction="ending") + # plt.show() + assert isinstance(ep, PathCollection) + + # support endpoints as dataframe + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint(pd.DataFrame(endpts), direction="ending") + # plt.show() + assert isinstance(ep, PathCollection) + + # test various possibilities for endpoint color configuration. + # first, color kwarg as scalar mv = PlotMapView(model=ml) mv.plot_bc("WEL", kper=2, color="blue") ep = mv.plot_endpoint(endpts, direction="ending", color="red") diff --git a/flopy/plot/crosssection.py b/flopy/plot/crosssection.py index 36f0dfeb61..1e50ee69e8 100644 --- a/flopy/plot/crosssection.py +++ b/flopy/plot/crosssection.py @@ -4,6 +4,7 @@ import matplotlib.colors import matplotlib.pyplot as plt import numpy as np +import pandas as pd from matplotlib.patches import Polygon from ..utils import geometry, import_optional_dependency @@ -1098,6 +1099,12 @@ def plot_pathline( else: pl = [pl] + # make sure each element in pl is a recarray + pl = [ + p.to_records(index=False) if isinstance(p, pd.DataFrame) else p + for p in pl + ] + marker = kwargs.pop("marker", None) markersize = kwargs.pop("markersize", None) markersize = kwargs.pop("ms", markersize) diff --git a/flopy/plot/map.py b/flopy/plot/map.py index f01ffda8bb..98ecdd44d8 100644 --- a/flopy/plot/map.py +++ b/flopy/plot/map.py @@ -3,6 +3,7 @@ import matplotlib.colors import matplotlib.pyplot as plt import numpy as np +import pandas as pd from matplotlib.collections import LineCollection, PathCollection from matplotlib.path import Path @@ -695,33 +696,35 @@ def plot_vector( def plot_pathline(self, pl, travel_time=None, **kwargs): """ - Plot the MODPATH pathlines. + Plot MODPATH pathlines. Parameters ---------- - pl : list of rec arrays or a single rec array - rec array or list of rec arrays is data returned from - modpathfile PathlineFile get_data() or get_alldata() - methods. Data in rec array is 'x', 'y', 'z', 'time', - 'k', and 'particleid'. + pl : list of recarrays or dataframes, or a single recarray or dataframe + Particle pathline data. If a list of recarrays or dataframes, + each must contain the path of only a single particle. If just + one recarray or dataframe, it should contain the paths of all + particles. Pathline data returned from PathlineFile.get_data() + or get_alldata() can be passed directly as this argument. Data + columns should be 'x', 'y', 'z', 'time', 'k', and 'particleid' + at minimum. Additional columns are ignored. The 'particleid' + column must be unique to each particle path. travel_time : float or str - travel_time is a travel time selection for the displayed - pathlines. If a float is passed then pathlines with times - less than or equal to the passed time are plotted. If a - string is passed a variety logical constraints can be added - in front of a time value to select pathlines for a select - period of time. 
Valid logical constraints are <=, <, ==, >=, and - >. For example, to select all pathlines less than 10000 days - travel_time='< 10000' would be passed to plot_pathline. - (default is None) - kwargs : layer, ax, colors. The remaining kwargs are passed - into the LineCollection constructor. If layer='all', - pathlines are output for all layers + Travel time selection. If a float, then pathlines with total + time less than or equal to the given value are plotted. If a + string, the value must be a comparison operator, then a time + value. Valid operators are <=, <, ==, >=, and >. For example, + to filter pathlines with less than 10000 units of total time + traveled, use '< 10000'. (Default is None.) + kwargs : dict + Explicitly supported kwargs are layer, ax, colors. + Any remaining kwargs are passed into the LineCollection + constructor. If layer='all', pathlines are shown for all layers. Returns ------- lc : matplotlib.collections.LineCollection - + The pathlines added to the plot. """ from matplotlib.collections import LineCollection @@ -734,6 +737,11 @@ def plot_pathline(self, pl, travel_time=None, **kwargs): else: pl = [pl] + pl = [ + p.to_records(index=False) if isinstance(p, pd.DataFrame) else p + for p in pl + ] + if "layer" in kwargs: kon = kwargs.pop("layer") if isinstance(kon, bytes): @@ -809,32 +817,35 @@ def plot_pathline(self, pl, travel_time=None, **kwargs): def plot_timeseries(self, ts, travel_time=None, **kwargs): """ - Plot the MODPATH timeseries. + Plot MODPATH timeseries. Parameters ---------- - ts : list of rec arrays or a single rec array - rec array or list of rec arrays is data returned from - modpathfile TimeseriesFile get_data() or get_alldata() - methods. Data in rec array is 'x', 'y', 'z', 'time', - 'k', and 'particleid'. + ts : list of recarrays or dataframes, or a single recarray or dataframe + Particle timeseries data. If a list of recarrays or dataframes, + each must contain the path of only a single particle. If just + one recarray or dataframe, it should contain the paths of all + particles. Timeseries data returned from TimeseriesFile.get_data() + or get_alldata() can be passed directly as this argument. Data + columns should be 'x', 'y', 'z', 'time', 'k', and 'particleid' + at minimum. Additional columns are ignored. The 'particleid' + column must be unique to each particle path. travel_time : float or str - travel_time is a travel time selection for the displayed - pathlines. If a float is passed then pathlines with times - less than or equal to the passed time are plotted. If a - string is passed a variety logical constraints can be added - in front of a time value to select pathlines for a select - period of time. Valid logical constraints are <=, <, ==, >=, and - >. For example, to select all pathlines less than 10000 days - travel_time='< 10000' would be passed to plot_pathline. - (default is None) - kwargs : layer, ax, colors. The remaining kwargs are passed - into the LineCollection constructor. If layer='all', - pathlines are output for all layers + Travel time selection. If a float, then pathlines with total + time less than or equal to the given value are plotted. If a + string, the value must be a comparison operator, then a time + value. Valid operators are <=, <, ==, >=, and >. For example, + to filter pathlines with less than 10000 units of total time + traveled, use '< 10000'. (Default is None.) + kwargs : dict + Explicitly supported kwargs are layer, ax, colors. + Any remaining kwargs are passed into the LineCollection + constructor. 
If layer='all', pathlines are shown for all layers.
 
        Returns
        -------
-        lo : list of Line2D objects
+        lc : matplotlib.collections.LineCollection
+            The pathlines added to the plot.
        """
        if "color" in kwargs:
            kwargs["markercolor"] = kwargs["color"]
@@ -850,13 +861,13 @@ def plot_endpoint(
        **kwargs,
    ):
        """
-        Plot the MODPATH endpoints.
+        Plot MODPATH endpoints.
 
        Parameters
        ----------
-        ep : rec array
+        ep : recarray or dataframe
            Endpoint particle data (numpy recarray or pandas dataframe) from the
-            MODPATH 6 endpoint file
+            MODPATH endpoint file
        direction : str
            String defining if starting or ending particle locations
            should be considered. (default is 'ending')
@@ -882,7 +893,8 @@
 
        Returns
        -------
-        sp : matplotlib.pyplot.scatter
+        sp : matplotlib.collections.PathCollection
+            The PathCollection added to the plot.
 
        """
 
From 70b9a37153bf51be7cc06db513b24950ce122fd2 Mon Sep 17 00:00:00 2001
From: Mike Taves
Date: Wed, 2 Aug 2023 05:16:03 +1200
Subject: [PATCH 020/101] refactor(expired deprecation): remove warning for
 third parameter of Grid.intersect (#1883)

---
 flopy/discretization/grid.py | 34 -----------------------------
 flopy/discretization/structuredgrid.py | 6 -----
 flopy/discretization/unstructuredgrid.py | 5 ----
 flopy/discretization/vertexgrid.py | 5 ----
 4 files changed, 50 deletions(-)

diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py
index 484c67247c..7780213f57 100644
--- a/flopy/discretization/grid.py
+++ b/flopy/discretization/grid.py
@@ -147,8 +147,6 @@ class Grid:
        1D array of x and y coordinates of cell vertices for whole grid
        (single layer) in C-style (row-major) order
        (same as np.ravel())
-    intersect(x, y, local)
-        returns the row and column of the grid that the x, y point is in
 
    See Also
    --------
@@ -957,38 +955,6 @@ def intersect(self, x, y, local=False, forgive=False):
        else:
            return x, y
 
-    def _warn_intersect(self, module, lineno):
-        """
-        Warning for modelgrid intersect() interface change.
-
-        Should be be removed after a couple of releases. Added in 3.3.5
-
-        Updated in 3.3.6 to raise an error and exit if intersect interface
-        is called incorrectly.
-
-        Should be removed in flopy 3.3.7
-
-        Parameters
-        ----------
-        module : str
-            module name path
-        lineno : int
-            line number where warning is called from
-
-        Returns
-        -------
-        None
-        """
-        module = os.path.split(module)[-1]
-        warning = (
-            "The interface 'intersect(self, x, y, local=False, "
-            "forgive=False)' has been deprecated. Use the "
-            "intersect(self, x, y, z=None, local=False, "
-            "forgive=False) interface instead."
- ) - - raise UserWarning(warning) - def set_coord_info( self, xoff=None, diff --git a/flopy/discretization/structuredgrid.py b/flopy/discretization/structuredgrid.py index feb9696b05..c7c6246f48 100644 --- a/flopy/discretization/structuredgrid.py +++ b/flopy/discretization/structuredgrid.py @@ -1,5 +1,4 @@ import copy -import inspect import os.path from typing import Union @@ -875,11 +874,6 @@ def intersect(self, x, y, z=None, local=False, forgive=False): The column number """ - if isinstance(z, bool): - # trigger interface change warning - frame_info = inspect.getframeinfo(inspect.currentframe()) - self._warn_intersect(frame_info.filename, frame_info.lineno) - # transform x and y to local coordinates x, y = super().intersect(x, y, local, forgive) diff --git a/flopy/discretization/unstructuredgrid.py b/flopy/discretization/unstructuredgrid.py index 21f52a6c0b..2ebef4f9ff 100644 --- a/flopy/discretization/unstructuredgrid.py +++ b/flopy/discretization/unstructuredgrid.py @@ -1,5 +1,4 @@ import copy -import inspect import os from typing import Union @@ -581,10 +580,6 @@ def intersect(self, x, y, z=None, local=False, forgive=False): The CELL2D number """ - if isinstance(z, bool): - frame_info = inspect.getframeinfo(inspect.currentframe()) - self._warn_intersect(frame_info.filename, frame_info.lineno) - if local: # transform x and y to real-world coordinates x, y = super().get_coords(x, y) diff --git a/flopy/discretization/vertexgrid.py b/flopy/discretization/vertexgrid.py index 9679b4a334..ed516e5710 100644 --- a/flopy/discretization/vertexgrid.py +++ b/flopy/discretization/vertexgrid.py @@ -1,5 +1,4 @@ import copy -import inspect import os import numpy as np @@ -321,10 +320,6 @@ def intersect(self, x, y, z=None, local=False, forgive=False): The CELL2D number """ - if isinstance(z, bool): - frame_info = inspect.getframeinfo(inspect.currentframe()) - self._warn_intersect(frame_info.filename, frame_info.lineno) - if local: # transform x and y to real-world coordinates x, y = super().get_coords(x, y) From 0c5f143ee5ce893d5313c88bd190ed5b0acac68d Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Wed, 2 Aug 2023 12:11:02 -0400 Subject: [PATCH 021/101] docs: add deprecation policy to dev docs and release guide (#1889) * describe deprecation policy loosely following NEP 23 * fix some autotest names per pending deprecations --- DEVELOPER.md | 21 ++++++++++++++++++--- autotest/test_export.py | 4 ++-- docs/make_release.md | 8 ++++++-- 3 files changed, 26 insertions(+), 7 deletions(-) diff --git a/DEVELOPER.md b/DEVELOPER.md index 6230132c9a..ad1d04c76b 100644 --- a/DEVELOPER.md +++ b/DEVELOPER.md @@ -17,11 +17,11 @@ This document describes how to set up a FloPy development environment, run the e - [Manually installing executables](#manually-installing-executables) - [Linux](#linux) - [Mac](#mac) + - [Updating FloPy packages](#updating-flopy-packages) - [Examples](#examples) - [Scripts](#scripts) - [Notebooks](#notebooks) - - [Developing example Notebooks](#developing-example-notebooks) - - [Adding a new notebook to the documentation](#adding-a-new-notebook-to-the-documentation) + - [Developing new notebooks](#developing-new-notebooks) - [Adding a tutorial notebook](#adding-a-tutorial-notebook) - [Adding an example notebook](#adding-an-example-notebook) - [Tests](#tests) @@ -33,6 +33,10 @@ This document describes how to set up a FloPy development environment, run the e - [Performance testing](#performance-testing) - [Benchmarking](#benchmarking) - [Profiling](#profiling) +- [Branching 
model](#branching-model) +- [Deprecation policy](#deprecation-policy) +- [Miscellaneous](#miscellaneous) + - [Locating the root](#locating-the-root) @@ -358,6 +362,17 @@ By default, `pytest-benchmark` will only print profiling results to `stdout`. If This project follows the [git flow](https://nvie.com/posts/a-successful-git-branching-model/): development occurs on the `develop` branch, while `main` is reserved for the state of the latest release. Development PRs are typically squashed to `develop`, to avoid merge commits. At release time, release branches are merged to `main`, and then `main` is merged back into `develop`. +## Deprecation policy + +This project loosely follows [NEP 23](https://numpy.org/neps/nep-0023-backwards-compatibility.html). Basic deprecation policy includes: + +- Deprecated features should be removed after at least 1 year or 2 non-patch releases. +- `DeprecationWarning` should be used for features scheduled for removal. +- `FutureWarning` should be used for features whose behavior will change in backwards-incompatible ways. +- Deprecation warning messages should include the deprecation version number (the release in which the deprecation message first appears) to permit timely follow-through later. + +See the linked article for more detail. + ## Miscellaneous ### Locating the root @@ -370,4 +385,4 @@ For a script in a subdirectory of the root, for instance, the conventional appro ```Python project_root_path = Path(__file__).parent.parent -``` +``` \ No newline at end of file diff --git a/autotest/test_export.py b/autotest/test_export.py index 9f4eac4d32..379353be18 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -1112,7 +1112,7 @@ def test_vtk_transient_array_2d(function_tmpdir, example_data_path): @requires_pkg("vtk") @pytest.mark.slow -def test_vtk_export_packages(function_tmpdir, example_data_path): +def test_vtk_add_packages(function_tmpdir, example_data_path): # test mf 2005 freyberg ws = function_tmpdir mpath = example_data_path / "freyberg_multilayer_transient" @@ -1572,7 +1572,7 @@ def load_iverts(fname, closed=False): @pytest.mark.mf6 @requires_pkg("vtk") -def test_vtk_export_model_without_packages_names(function_tmpdir): +def test_vtk_add_model_without_packages_names(function_tmpdir): from vtkmodules.util.numpy_support import vtk_to_numpy from vtkmodules.vtkIOLegacy import vtkUnstructuredGridReader diff --git a/docs/make_release.md b/docs/make_release.md index c8bcab8889..1825b3d7a3 100644 --- a/docs/make_release.md +++ b/docs/make_release.md @@ -28,11 +28,15 @@ The FloPy release procedure is mostly automated with GitHub Actions in [`release 3. Update the authors in `CITATION.cff` for the Software/Code citation for FloPy, if required. -4. Review deprecations. To search for deprecation warnings with git: `git grep [-[A/B/C]N] ` for N optional extra lines of context. Some terms to search for: +4. If this is a minor or major release, review deprecation warnings (if this is a patch release, skip this step). Correct any deprecation version numbers if necessary — for instance, if the warning was added in the latest development cycle but incorrectly anticipated the forthcoming release number (e.g., if this release was expected to be minor but was promoted to major). If any deprecations are due this release, remove the corresponding features. FloPy loosely follows [NEP 23](https://numpy.org/neps/nep-0023-backwards-compatibility.html) deprecation guidelines: removal is recommended after at least 1 year or 2 non-patch releases. 
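   To make the policy concrete, a warning that satisfies it might look like the following sketch (hypothetical function names; not code from this changeset):

   ```python
   import warnings

   def old_helper(x):
       """Deprecated alias for ``new_helper``."""
       # "3.4.0" stands in for whichever release introduced the deprecation;
       # naming it in the message permits timely removal later
       warnings.warn(
           "old_helper is deprecated since 3.4.0 and will be removed in a "
           "future release; use new_helper instead.",
           DeprecationWarning,
           stacklevel=2,  # attribute the warning to the caller, not this shim
       )
       return new_helper(x)

   def new_helper(x):
       return x
   ```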
- - deprecated:: + To search for deprecation warnings with git: `git grep [-[A/B/C]N] `, where N is the optional number of extra lines of context and A/B/C selects post-context/pre-context/both, respectively. Some terms to search for: + + - deprecated + - .. deprecated:: - DEPRECATED - DeprecationWarning + - FutureWarning ## Release procedure From 3cc66269b08de837075095efb7f6a68fb19fbf0a Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Wed, 2 Aug 2023 17:44:34 -0400 Subject: [PATCH 022/101] docs: remove sphinx-rtd-theme jquery workaround (#1897) --- .docs/conf.py | 1 - 1 file changed, 1 deletion(-) diff --git a/.docs/conf.py b/.docs/conf.py index 5bdc46f9f6..f502cebefc 100644 --- a/.docs/conf.py +++ b/.docs/conf.py @@ -144,7 +144,6 @@ "nbsphinx", "nbsphinx_link", "recommonmark", - "sphinxcontrib.jquery", # https://github.com/readthedocs/sphinx_rtd_theme/issues/1452 ] # Settings for GitHub actions integration From dd77e72b9676be8834dcf0a08e6f55fd35e2bc29 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Wed, 2 Aug 2023 18:33:52 -0400 Subject: [PATCH 023/101] refactor(dependencies): constrain sphinx >=4 (#1898) --- pyproject.toml | 1 + 1 file changed, 1 insertion(+) diff --git a/pyproject.toml b/pyproject.toml index dac35baaed..68ae0060cd 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -88,6 +88,7 @@ doc = [ "PyYAML", "recommonmark", "rtds-action", + "sphinx >=4", "sphinx-rtd-theme", ] From 03fa01fe09db45e30fe2faf4c5160622b78f1b24 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 08:49:24 -0400 Subject: [PATCH 024/101] refactor(dependencies): constrain sphinx-rtd-theme >=1 (#1900) --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index 68ae0060cd..12750e0a6b 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -89,7 +89,7 @@ doc = [ "recommonmark", "rtds-action", "sphinx >=4", - "sphinx-rtd-theme", + "sphinx-rtd-theme >=1", ] [project.scripts] From 809e624fe77c4d112e482698434e2199345e68c2 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 08:52:36 -0400 Subject: [PATCH 025/101] refactor(mf6): remove deprecated features (#1894) * layer kwarg in get_data() in mfdataarray.py, 3.3.5 (June 2022) * get_mvr_file, get_mvt_file, get_gnc_file in mfsimbase.py, overdue since 3.3.6 * get_ims_package in mfsimbase.py (and update usage in mfmodel.py), overdue since 3.4 --- flopy/mf6/data/mfdataarray.py | 9 ---- flopy/mf6/mfmodel.py | 2 +- flopy/mf6/mfsimbase.py | 84 ----------------------------------- 3 files changed, 1 insertion(+), 94 deletions(-) diff --git a/flopy/mf6/data/mfdataarray.py b/flopy/mf6/data/mfdataarray.py index 48981dcb94..f54eec2901 100644 --- a/flopy/mf6/data/mfdataarray.py +++ b/flopy/mf6/data/mfdataarray.py @@ -1815,7 +1815,6 @@ def get_record(self, key=None): def get_data(self, key=None, apply_mult=True, **kwargs): """Returns the data associated with stress period key `key`. - If `layer` is None, returns all data for time `key`. 
Parameters ---------- @@ -1825,14 +1824,6 @@ def get_data(self, key=None, apply_mult=True, **kwargs): Whether to apply multiplier to data prior to returning it """ - if "layer" in kwargs: - warnings.warn( - "The 'layer' parameter has been deprecated, use 'key' " - "instead.", - category=DeprecationWarning, - ) - key = kwargs["layer"] - if self._data_storage is not None and len(self._data_storage) > 0: if key is None: sim_time = self._data_dimensions.package_dim.model_dim[ diff --git a/flopy/mf6/mfmodel.py b/flopy/mf6/mfmodel.py index dcab7fdea9..a62fcef9eb 100644 --- a/flopy/mf6/mfmodel.py +++ b/flopy/mf6/mfmodel.py @@ -1207,7 +1207,7 @@ def get_ims_package(self): for record in solution_group: for model_name in record[2:]: if model_name == self.name: - return self.simulation.get_ims_package(record[1]) + return self.simulation.get_solution_package(record[1]) return None def get_steadystate_list(self): diff --git a/flopy/mf6/mfsimbase.py b/flopy/mf6/mfsimbase.py index cb9b2d7e5f..1c96c23b46 100644 --- a/flopy/mf6/mfsimbase.py +++ b/flopy/mf6/mfsimbase.py @@ -1773,81 +1773,6 @@ def get_file(self, filename): excpt_str = f'file "{filename}" can not be found.' raise FlopyException(excpt_str) - def get_mvr_file(self, filename): - """ - Get a specified mover file. - - Parameters - ---------- - filename : str - Name of mover file to get - - Returns - -------- - mover package : MFPackage - - """ - warnings.warn( - "get_mvr_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'MVR file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_mvt_file(self, filename): - """ - Get a specified mvt file. - - Parameters - ---------- - filename : str - Name of mover transport file to get - - Returns - -------- - mover transport package : MFPackage - - """ - warnings.warn( - "get_mvt_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'MVT file "{filename}" can not be found.' - raise FlopyException(excpt_str) - - def get_gnc_file(self, filename): - """ - Get a specified gnc file. - - Parameters - ---------- - filename : str - Name of gnc file to get - - Returns - -------- - gnc package : MFPackage - - """ - warnings.warn( - "get_gnc_file will be deprecated and will be removed in version " - "3.3.6. Use get_file", - PendingDeprecationWarning, - ) - if filename in self._other_files: - return self._other_files[filename] - else: - excpt_str = f'GNC file "{filename}" can not be found.' - raise FlopyException(excpt_str) - def remove_exchange_file(self, package): """ Removes the exchange file "package". This is for internal flopy @@ -2230,15 +2155,6 @@ def register_model(self, model, model_type, model_name, model_namefile): return self.structure.model_struct_objs[model_type] - def get_ims_package(self, key): - warnings.warn( - "get_ims_package() has been deprecated and will be " - "removed in version 3.3.7. Use " - "get_solution_package() instead.", - DeprecationWarning, - ) - return self.get_solution_package(key) - def get_solution_package(self, key): """ Get the solution package with the specified `key`. 
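For downstream code touched by PATCH 025, migration is mechanical. A minimal sketch, assuming a simulation on disk; the file keys "model.ims" and "model.gnc" are hypothetical stand-ins, not values from this changeset:

```python
import flopy

sim = flopy.mf6.MFSimulation.load(sim_ws=".")

# before (removed): ims = sim.get_ims_package("model.ims")
ims = sim.get_solution_package("model.ims")

# before (removed): gnc = sim.get_gnc_file("model.gnc")
# get_file() also replaces get_mvr_file() and get_mvt_file()
gnc = sim.get_file("model.gnc")

# for transient arrays, the removed get_data(layer=...) kwarg becomes key=...
```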
From cf675c6b892a0d484f91465a4ad6600ffe4d518e Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 09:18:44 -0400 Subject: [PATCH 026/101] refactor(plotutil): remove deprecated utilities (#1891) - centered_specific_discharge, 3.3.1 (June 2020) - cvfd_to_patch_collection, 3.3.4 (August 2021) - plot_cvfd, 3.3.4 (August 2021) --- flopy/plot/plotutil.py | 330 ----------------------------------------- 1 file changed, 330 deletions(-) diff --git a/flopy/plot/plotutil.py b/flopy/plot/plotutil.py index ad27bf31c8..4076e6685f 100644 --- a/flopy/plot/plotutil.py +++ b/flopy/plot/plotutil.py @@ -1554,98 +1554,6 @@ def saturated_thickness(head, top, botm, laytyp, mask_values=None): return sat_thk - @staticmethod - def centered_specific_discharge(Qx, Qy, Qz, delr, delc, sat_thk): - """ - DEPRECATED. Use postprocessing.get_specific_discharge() instead. - - Using the MODFLOW discharge, calculate the cell centered specific - discharge by dividing by the flow width and then averaging - to the cell center. - - Parameters - ---------- - Qx : numpy.ndarray - MODFLOW 'flow right face' - Qy : numpy.ndarray - MODFLOW 'flow front face'. The sign on this array will be flipped - by this function so that the y axis is positive to north. - Qz : numpy.ndarray - MODFLOW 'flow lower face'. The sign on this array will be - flipped by this function so that the z axis is positive - in the upward direction. - delr : numpy.ndarray - MODFLOW delr array - delc : numpy.ndarray - MODFLOW delc array - sat_thk : numpy.ndarray - Saturated thickness for each cell - - Returns - ------- - (qx, qy, qz) : tuple of numpy.ndarrays - Specific discharge arrays that have been interpolated to cell centers. - - """ - warnings.warn( - "centered_specific_discharge() has been deprecated and will be " - "removed in version 3.3.5. Use " - "postprocessing.get_specific_discharge() instead.", - DeprecationWarning, - ) - - qx = None - qy = None - qz = None - - if Qx is not None: - nlay, nrow, ncol = Qx.shape - qx = np.zeros(Qx.shape, dtype=Qx.dtype) - - for k in range(nlay): - for j in range(ncol - 1): - area = ( - delc[:] - * 0.5 - * (sat_thk[k, :, j] + sat_thk[k, :, j + 1]) - ) - idx = area > 0.0 - qx[k, idx, j] = Qx[k, idx, j] / area[idx] - - qx[:, :, 1:] = 0.5 * (qx[:, :, 0 : ncol - 1] + qx[:, :, 1:ncol]) - qx[:, :, 0] = 0.5 * qx[:, :, 0] - - if Qy is not None: - nlay, nrow, ncol = Qy.shape - qy = np.zeros(Qy.shape, dtype=Qy.dtype) - - for k in range(nlay): - for i in range(nrow - 1): - area = ( - delr[:] - * 0.5 - * (sat_thk[k, i, :] + sat_thk[k, i + 1, :]) - ) - idx = area > 0.0 - qy[k, i, idx] = Qy[k, i, idx] / area[idx] - - qy[:, 1:, :] = 0.5 * (qy[:, 0 : nrow - 1, :] + qy[:, 1:nrow, :]) - qy[:, 0, :] = 0.5 * qy[:, 0, :] - qy = -qy - - if Qz is not None: - qz = np.zeros(Qz.shape, dtype=Qz.dtype) - dr = delr.reshape((1, delr.shape[0])) - dc = delc.reshape((delc.shape[0], 1)) - area = dr * dc - for k in range(nlay): - qz[k, :, :] = Qz[k, :, :] / area[:, :] - qz[1:, :, :] = 0.5 * (qz[0 : nlay - 1, :, :] + qz[1:nlay, :, :]) - qz[0, :, :] = 0.5 * qz[0, :, :] - qz = -qz - - return (qx, qy, qz) - class UnstructuredPlotUtilities: """ @@ -2270,182 +2178,6 @@ def plot_shapefile( return pc -def cvfd_to_patch_collection(verts, iverts): - """ - Create a patch collection from control volume vertices and incidence list - - Parameters - ---------- - verts : ndarray - 2d array of x and y points. 
- iverts : list of lists - should be of len(ncells) with a list of vertex numbers for each cell - - """ - warnings.warn( - "cvfd_to_patch_collection is deprecated and will be removed in " - "version 3.3.5. Use PlotMapView for plotting", - DeprecationWarning, - ) - - from matplotlib.collections import PatchCollection - from matplotlib.patches import Polygon - - ptchs = [] - for ivertlist in iverts: - points = [] - for iv in ivertlist: - points.append((verts[iv, 0], verts[iv, 1])) - # close the polygon, if necessary - if ivertlist[0] != ivertlist[-1]: - iv = ivertlist[0] - points.append((verts[iv, 0], verts[iv, 1])) - ptchs.append(Polygon(points)) - pc = PatchCollection(ptchs) - return pc - - -def plot_cvfd( - verts, - iverts, - ax=None, - layer=0, - cmap="Dark2", - edgecolor="scaled", - facecolor="scaled", - a=None, - masked_values=None, - **kwargs, -): - """ - Generic function for plotting a control volume finite difference grid of - information. - - Parameters - ---------- - verts : ndarray - 2d array of x and y points. - iverts : list of lists - should be of len(ncells) with a list of vertex number for each cell - ax : matplotlib.pylot axis - matplotlib.pyplot axis instance. Default is None - layer : int - layer to extract. Used in combination to the optional ncpl - parameter. Default is 0 - cmap : string - Name of colormap to use for polygon shading (default is 'Dark2') - edgecolor : string - Color name. (Default is 'scaled' to scale the edge colors.) - facecolor : string - Color name. (Default is 'scaled' to scale the face colors.) - a : numpy.ndarray - Array to plot. - masked_values : iterable of floats, ints - Values to mask. - kwargs : dictionary - Keyword arguments that are passed to PatchCollection.set(``**kwargs``). - Some common kwargs would be 'linewidths', 'linestyles', 'alpha', etc. - - Returns - ------- - pc : matplotlib.collections.PatchCollection - - Examples - -------- - - """ - warnings.warn( - "plot_cvfd is deprecated and will be removed in version 3.3.5. 
" - "Use PlotMapView for plotting", - DeprecationWarning, - ) - - if "vmin" in kwargs: - vmin = kwargs.pop("vmin") - else: - vmin = None - - if "vmax" in kwargs: - vmax = kwargs.pop("vmax") - else: - vmax = None - - if "ncpl" in kwargs: - nlay = layer + 1 - ncpl = kwargs.pop("ncpl") - if isinstance(ncpl, int): - i = int(ncpl) - ncpl = np.ones((nlay), dtype=int) * i - elif isinstance(ncpl, list) or isinstance(ncpl, tuple): - ncpl = np.array(ncpl) - i0 = 0 - i1 = 0 - for k in range(nlay): - i0 = i1 - i1 = i0 + ncpl[k] - # retain iverts in selected layer - iverts = iverts[i0:i1] - # retain vertices in selected layer - tverts = [] - for iv in iverts: - for iloc in iv: - tverts.append((verts[iloc, 0], verts[iloc, 1])) - verts = np.array(tverts) - # calculate offset for starting vertex in layer based on - # global vertex numbers - iadj = iverts[0][0] - # reset iverts to relative vertices in selected layer - tiverts = [] - for iv in iverts: - i = [] - for t in iv: - i.append(t - iadj) - tiverts.append(i) - iverts = tiverts - else: - i0 = 0 - i1 = len(iverts) - - # get current axis - if ax is None: - ax = plt.gca() - cm = plt.get_cmap(cmap) - - pc = cvfd_to_patch_collection(verts, iverts) - pc.set(**kwargs) - - # set colors - if a is None: - nshp = len(pc.get_paths()) - cccol = cm(1.0 * np.arange(nshp) / nshp) - if facecolor == "scaled": - pc.set_facecolor(cccol) - else: - pc.set_facecolor(facecolor) - if edgecolor == "scaled": - pc.set_edgecolor(cccol) - else: - pc.set_edgecolor(edgecolor) - else: - pc.set_cmap(cm) - if masked_values is not None: - for mval in masked_values: - a = np.ma.masked_equal(a, mval) - - # add NaN values to mask - a = np.ma.masked_where(np.isnan(a), a) - - if edgecolor == "scaled": - pc.set_edgecolor("none") - else: - pc.set_edgecolor(edgecolor) - pc.set_array(a[i0:i1]) - pc.set_clim(vmin=vmin, vmax=vmax) - # add the patch collection to the axis - ax.add_collection(pc) - return pc - - def _set_coord_info(mg, xul, yul, xll, yll, rotation): """ @@ -2484,68 +2216,6 @@ def _set_coord_info(mg, xul, yul, xll, yll, rotation): return mg -def _depreciated_dis_handler(modelgrid, dis): - """ - PlotMapView handler for the deprecated dis parameter - which adds top and botm information to the modelgrid - - Parameter - --------- - modelgrid : fp.discretization.Grid object - - dis : fp.modflow.ModflowDis object - - Returns - ------- - modelgrid : fp.discretization.Grid - - """ - # creates a new modelgrid instance with the dis information - from ..discretization import StructuredGrid, UnstructuredGrid, VertexGrid - - warnings.warn( - "the dis parameter has been depreciated and will be removed in " - "version 3.3.5.", - PendingDeprecationWarning, - ) - if modelgrid.grid_type == "vertex": - modelgrid = VertexGrid( - vertices=modelgrid.vertices, - cell2d=modelgrid.cell2d, - top=dis.top.array, - botm=dis.botm.array, - idomain=modelgrid.idomain, - xoff=modelgrid.xoffset, - yoff=modelgrid.yoffset, - angrot=modelgrid.angrot, - ) - if modelgrid.grid_type == "unstructured": - modelgrid = UnstructuredGrid( - vertices=modelgrid._vertices, - iverts=modelgrid._iverts, - xcenters=modelgrid._xc, - ycenters=modelgrid._yc, - top=dis.top.array, - botm=dis.botm.array, - idomain=modelgrid.idomain, - xoff=modelgrid.xoffset, - yoff=modelgrid.yoffset, - angrot=modelgrid.angrot, - ) - else: - modelgrid = StructuredGrid( - delc=dis.delc.array, - delr=dis.delr.array, - top=dis.top.array, - botm=dis.botm.array, - idomain=modelgrid.idomain, - xoff=modelgrid.xoffset, - yoff=modelgrid.yoffset, - 
angrot=modelgrid.angrot, - ) - return modelgrid - - def advanced_package_bc_helper(pkg, modelgrid, kper): """ Helper function for plotting boundary conditions from "advanced" packages From 6f2da88f9b2d53f94f9c003ddfde3daf6952e3ba Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 09:19:10 -0400 Subject: [PATCH 027/101] refactor(shapefile_utils): remove deprecated SpatialReference usages (#1892) --- flopy/export/shapefile_utils.py | 23 ++--------------------- 1 file changed, 2 insertions(+), 21 deletions(-) diff --git a/flopy/export/shapefile_utils.py b/flopy/export/shapefile_utils.py index 86cb533320..87184b74fd 100644 --- a/flopy/export/shapefile_utils.py +++ b/flopy/export/shapefile_utils.py @@ -18,9 +18,6 @@ from ..utils import Util3d, flopy_io, import_optional_dependency from ..utils.crs import get_crs -# web address of spatial reference dot org -srefhttp = "https://spatialreference.org" - def write_gridlines_shapefile(filename: Union[str, os.PathLike], mg): """ @@ -41,15 +38,7 @@ def write_gridlines_shapefile(filename: Union[str, os.PathLike], mg): shapefile = import_optional_dependency("shapefile") wr = shapefile.Writer(str(filename), shapeType=shapefile.POLYLINE) wr.field("number", "N", 18, 0) - if mg.__class__.__name__ == "SpatialReference": - grid_lines = mg.get_grid_lines() - warnings.warn( - "SpatialReference has been deprecated. Use StructuredGrid" - " instead.", - category=DeprecationWarning, - ) - else: - grid_lines = mg.grid_lines + grid_lines = mg.grid_lines for i, line in enumerate(grid_lines): wr.line([line]) wr.record(i) @@ -112,15 +101,7 @@ def write_grid_shapefile( w = shapefile.Writer(str(path), shapeType=shapefile.POLYGON) w.autoBalance = 1 - if mg.__class__.__name__ == "SpatialReference": - verts = copy.deepcopy(mg.vertices) - warnings.warn( - "SpatialReference has been deprecated. Use StructuredGrid" - " instead.", - category=DeprecationWarning, - ) - mg.grid_type = "structured" - elif mg.grid_type == "structured": + if mg.grid_type == "structured": verts = [ mg.get_cell_vertices(i, j) for i in range(mg.nrow) From ca838b17e89114be8952d07e85d88b128d2ec2c3 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 09:23:08 -0400 Subject: [PATCH 028/101] refactor(vtk): remove deprecated export_* functions (#1890) --- flopy/export/utils.py | 26 +-- flopy/export/vtk.py | 528 ------------------------------------------ flopy/mf6/mfmodel.py | 2 +- 3 files changed, 6 insertions(+), 550 deletions(-) diff --git a/flopy/export/utils.py b/flopy/export/utils.py index 81d02f0771..e8266787a4 100644 --- a/flopy/export/utils.py +++ b/flopy/export/utils.py @@ -605,7 +605,7 @@ def model_export( prjfile : str or pathlike, optional if `crs` is specified ESRI-style projection file with well-known text defining the CRS for the model grid (must be projected; geographic CRS are not supported). - if fmt is set to 'vtk', parameters of vtk.export_model + if fmt is set to 'vtk', parameters of Vtk initializer """ assert isinstance(ml, ModelInterface) @@ -698,7 +698,7 @@ def package_export( prjfile : str or pathlike, optional if `crs` is specified ESRI-style projection file with well-known text defining the CRS for the model grid (must be projected; geographic CRS are not supported). 
- if fmt is set to 'vtk', parameters of vtk.export_package + if fmt is set to 'vtk', parameters of Vtk initializer Returns ------- @@ -771,22 +771,6 @@ def package_export( vtkobj.add_package(pak, masked_values=masked_values) vtkobj.write(os.path.join(f, name), kper=kpers) - - """ - vtk.export_package( - pak.parent, - pak.name, - f, - nanval=nanval, - smooth=smooth, - point_scalars=point_scalars, - vtk_grid_type=vtk_grid_type, - true2d=true2d, - binary=binary, - kpers=kpers, - ) - """ - else: raise NotImplementedError(f"unrecognized export argument:{f}") @@ -1049,7 +1033,7 @@ def transient2d_export(f: Union[str, os.PathLike], t2d, fmt=None, **kwargs): max_valid : maximum valid value modelgrid : flopy.discretization.Grid model grid instance which will supercede the flopy.model.modelgrid - if fmt is set to 'vtk', parameters of vtk.export_transient + if fmt is set to 'vtk', parameters of Vtk initializer """ @@ -1209,7 +1193,7 @@ def array3d_export(f: Union[str, os.PathLike], u3d, fmt=None, **kwargs): max_valid : maximum valid value modelgrid : flopy.discretization.Grid model grid instance which will supercede the flopy.model.modelgrid - if fmt is set to 'vtk', parameters of vtk.export_array + if fmt is set to 'vtk', parameters of Vtk initializer """ @@ -1387,7 +1371,7 @@ def array2d_export( max_valid : maximum valid value modelgrid : flopy.discretization.Grid model grid instance which will supercede the flopy.model.modelgrid - if fmt is set to 'vtk', parameters of vtk.export_array + if fmt is set to 'vtk', parameters of Vtk initializer """ assert isinstance( diff --git a/flopy/export/vtk.py b/flopy/export/vtk.py index 45fa2d6633..fd9cbf1232 100644 --- a/flopy/export/vtk.py +++ b/flopy/export/vtk.py @@ -1500,531 +1500,3 @@ def __create_transient_vtk_path(self, path, kper): {:06d} represents the six zero padded stress period time """ return path.parent / f"{path.stem.rstrip('_')}_{kper:06d}{path.suffix}" - - -def export_model( - model, - otfolder: Union[str, os.PathLike], - package_names=None, - nanval=-1e20, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, - kpers=None, -): - """ - Export model to vtk - - .. deprecated:: 3.3.5 - Use :meth:`Vtk.add_model` - - Parameters - ---------- - model : flopy model instance - flopy model - otfolder : str or PathLike - output folder - package_names : list - list of package names to be exported - nanval : scalar - no data value, default value is -1e20 - array2d : bool - True if array is 2d, default is False - smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. - binary : bool - if True the output file will be binary, default is False - kpers : iterable of int - Stress periods to export. If None (default), all stress periods will be - exported. 
- """ - warnings.warn("export_model is deprecated, please use Vtk.add_model()") - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - - vtk = Vtk( - model, - vertical_exageration=1, - binary=binary, - smooth=smooth, - point_scalars=point_scalars, - ) - - vtk.add_model(model, package_names) - - if not os.path.exists(otfolder): - os.mkdir(otfolder) - - name = model.name - vtk.write(os.path.join(otfolder, name), kper=kpers) - - -def export_package( - pak_model, - pak_name, - otfolder: Union[str, os.PathLike], - vtkobj=None, - nanval=-1e20, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, - kpers=None, -): - """ - Export package to vtk - - .. deprecated:: 3.3.5 - Use :meth:`Vtk.add_package` - - Parameters - ---------- - pak_model : flopy model instance - the model of the package - pak_name : str - the name of the package - otfolder : str or PathLike - output folder to write the data - vtkobj : VTK instance - a vtk object (allows export_package to be called from - export_model) - nanval : scalar - no data value, default value is -1e20 - smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. - binary : bool - if True the output file will be binary, default is False - kpers : iterable of int - Stress periods to export. If None (default), all stress periods will be - exported. - """ - warnings.warn("export_package is deprecated, use Vtk.add_package()") - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - - if not vtkobj: - vtk = Vtk( - pak_model, - binary=binary, - smooth=smooth, - point_scalars=point_scalars, - ) - else: - vtk = vtkobj - - if not os.path.exists(otfolder): - os.mkdir(otfolder) - - if isinstance(pak_name, list): - pak_name = pak_name[0] - - p = pak_model.get_package(pak_name) - vtk.add_package(p) - - vtk.write(os.path.join(otfolder, pak_name), kper=kpers) - - -def export_transient( - model, - array, - output_folder: Union[str, os.PathLike], - name, - nanval=-1e20, - array2d=False, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, - kpers=None, -): - """ - Export transient arrays and lists to vtk - - .. 
deprecated:: 3.3.5 - Use :meth:`Vtk.add_transient_array` or :meth:`Vtk.add_transient_list` - - Parameters - ---------- - model : MFModel - the flopy model instance - array : Transient instance - flopy transient array - output_folder : str or PathLike - output folder to write the data - name : str - name of array - nanval : scalar - no data value, default value is -1e20 - array2d : bool - True if array is 2d, default is False - smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. - binary : bool - if True the output file will be binary, default is False - kpers : iterable of int - Stress periods to export. If None (default), all stress periods will be - exported. - """ - warnings.warn( - "export_transient is deprecated, use Vtk.add_transient_array() or " - "Vtk.add_transient_list()" - ) - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - if array2d: - warnings.warn( - "array2d parameter is deprecated, 2d arrays are " - "handled automatically" - ) - - if not os.path.exists(output_folder): - os.mkdir(output_folder) - - vtk = Vtk(model, binary=binary, smooth=smooth, point_scalars=point_scalars) - - if array.data_type == DataType.transient2d: - if array.array is not None: - if hasattr(array, "transient_2ds"): - vtk.add_transient_array(array.transient_2ds, name) - else: - d = {ix: i for ix, i in enumerate(array.array)} - vtk.add_transient_array(d, name) - - elif array.data_type == DataType.transient3d: - if array.array is None: - vtk.add_transient_array(array.transient_3ds, name) - - elif array.data_type == DataType.transientlist: - vtk.add_transient_list(array) - - else: - raise TypeError(f"type {type(array)} not valid for export_transient") - - vtk.write(os.path.join(output_folder, name), kper=kpers) - - -def export_array( - model, - array, - output_folder: Union[str, os.PathLike], - name, - nanval=-1e20, - array2d=False, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, -): - """ - Export array to vtk - - .. 
deprecated:: 3.3.5 - Use :meth:`Vtk.add_array` - - Parameters - ---------- - model : flopy model instance - the flopy model instance - array : flopy array - flopy 2d or 3d array - output_folder : str or PathLike - output folder to write the data - name : str - name of array - nanval : scalar - no data value, default value is -1e20 - array2d : bool - true if the array is 2d and represents the first layer, default is - False - smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. - binary : bool - if True the output file will be binary, default is False - """ - warnings.warn("export_array is deprecated, please use Vtk.add_array()") - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - if array2d: - warnings.warn( - "array2d parameter is deprecated, 2d arrays are " - "handled automatically" - ) - - if not os.path.exists(output_folder): - os.mkdir(output_folder) - - if array.size < model.modelgrid.nnodes: - if array.size < model.modelgrid.ncpl: - raise AssertionError( - "Array size must be equal to either ncpl or nnodes" - ) - - array = np.zeros(model.modelgrid.nnodes) * np.nan - array[: array.size] = np.ravel(array) - - vtk = Vtk(model, binary=binary, smooth=smooth, point_scalars=point_scalars) - - vtk.add_array(array, name) - vtk.write(os.path.join(output_folder, name)) - - -def export_heads( - model, - hdsfile, - otfolder, - text="head", - precision="auto", - verbose=False, - nanval=-1e20, - kstpkper=None, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, -): - """ - Exports binary head file to vtk - - .. deprecated:: 3.3.5 - Use :meth:`Vtk.add_heads` - - Parameters - ---------- - model : MFModel - the flopy model instance - hdsfile : str, HeadFile object - binary head file path or object - otfolder : str - output folder to write the data - text : string - Name of the text string in the head file. Default is 'head'. - precision : str - Precision of data in the head file: 'auto', 'single' or 'double'. - Default is 'auto'. - verbose : bool - If True, write information to the screen. Default is False. - nanval : scalar - no data value, default value is -1e20 - kstpkper : tuple of ints or list of tuple of ints - A tuple containing the time step and stress period (kstp, kper). - The kstp and kper values are zero based. 
- smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. - binary : bool - if True the output file will be binary, default is False - """ - warnings.warn("export_heads is deprecated, use Vtk.add_heads()") - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - - from ..utils import HeadFile - - if not os.path.exists(otfolder): - os.mkdir(otfolder) - - if not isinstance(hdsfile, HeadFile): - hds = HeadFile( - hdsfile, text=text, precision=precision, verbose=verbose - ) - else: - hds = hdsfile - - vtk = Vtk(model, binary=binary, smooth=smooth, point_scalars=point_scalars) - - vtk.add_heads(hds, kstpkper) - name = f"{model.name}_{text}" - vtk.write(os.path.join(otfolder, name)) - - -def export_cbc( - model, - cbcfile, - otfolder, - precision="single", - verbose=False, - nanval=-1e20, - kstpkper=None, - text=None, - smooth=False, - point_scalars=False, - vtk_grid_type="auto", - true2d=False, - binary=True, -): - """ - Exports cell by cell file to vtk - - .. deprecated:: 3.3.5 - Use :meth:`Vtk.add_cell_budget` - - Parameters - ---------- - model : flopy model instance - the flopy model instance - cbcfile : str - the cell by cell file - otfolder : str - output folder to write the data - precision : str - Precision of data in the cell by cell file: 'single' or 'double'. - Default is 'single'. - verbose : bool - If True, write information to the screen. Default is False. - nanval : scalar - no data value - kstpkper : tuple of ints or list of tuple of ints - A tuple containing the time step and stress period (kstp, kper). - The kstp and kper values are zero based. - text : str or list of str - The text identifier for the record. Examples include - 'RIVER LEAKAGE', 'STORAGE', 'FLOW RIGHT FACE', etc. - smooth : bool - if True, will create smooth layer elevations, default is False - point_scalars : bool - if True, will also output array values at cell vertices, default is - False; note this automatically sets smooth to True - vtk_grid_type : str - Specific vtk_grid_type or 'auto' (default). Possible specific values - are 'ImageData', 'RectilinearGrid', and 'UnstructuredGrid'. - If 'auto', the grid type is automatically determined. Namely: - * A regular grid (in all three directions) will be saved as an - 'ImageData'. - * A rectilinear (in all three directions), non-regular grid - will be saved as a 'RectilinearGrid'. - * Other grids will be saved as 'UnstructuredGrid'. - true2d : bool - If True, the model is expected to be 2d (1 layer, 1 row or 1 column) - and the data will be exported as true 2d data, default is False. 
- binary : bool - if True the output file will be binary, default is False - """ - warnings.warn("export_cbc is deprecated, use Vtk.add_cell_budget()") - - if nanval != -1e20: - warnings.warn("nanval is deprecated, setting to np.nan") - if true2d: - warnings.warn("true2d is no longer supported, setting to False") - if vtk_grid_type != "auto": - warnings.warn("vtk_grid_type is deprecated, setting to binary") - - from ..utils import CellBudgetFile - - if not os.path.exists(otfolder): - os.mkdir(otfolder) - - if not isinstance(cbcfile, CellBudgetFile): - cbc = CellBudgetFile(cbcfile, precision=precision, verbose=verbose) - else: - cbc = cbcfile - - vtk = Vtk(model, binary=binary, smooth=smooth, point_scalars=point_scalars) - - vtk.add_cell_budget(cbc, text, kstpkper) - fname = f"{model.name}_CBC" - vtk.write(os.path.join(otfolder, fname)) diff --git a/flopy/mf6/mfmodel.py b/flopy/mf6/mfmodel.py index a62fcef9eb..b576ab3c50 100644 --- a/flopy/mf6/mfmodel.py +++ b/flopy/mf6/mfmodel.py @@ -640,7 +640,7 @@ def export(self, f, **kwargs): modelgrid: flopy.discretization.Grid User supplied modelgrid object which will supercede the built in modelgrid object - if fmt is set to 'vtk', parameters of vtk.export_model + if fmt is set to 'vtk', parameters of Vtk initializer """ from ..export import utils From cf5e498838754de1c1fb941bd66851ad272eb481 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 13:57:08 -0400 Subject: [PATCH 029/101] test: add requires_pkg marker to model splitter test (#1902) --- autotest/test_model_splitter.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index d475bdd918..08cbb42c5d 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -1,7 +1,7 @@ import numpy as np import pytest from autotest.conftest import get_example_data_path -from modflow_devtools.markers import requires_exe +from modflow_devtools.markers import requires_exe, requires_pkg from flopy.mf6 import MFSimulation from flopy.mf6.utils import Mf6Splitter @@ -155,6 +155,7 @@ def test_model_with_lak_sfr_mvr(function_tmpdir): np.testing.assert_allclose(new_heads, original_heads, err_msg=err_msg) +@requires_pkg("pymetis") @requires_exe("mf6") @pytest.mark.slow def test_metis_splitting_with_lak_sfr(function_tmpdir): From 9a22f639865b9d81716f7fcecdfe33b3e5a352a2 Mon Sep 17 00:00:00 2001 From: Ralf Junghanns Date: Thu, 3 Aug 2023 23:03:21 +0200 Subject: [PATCH 030/101] fix(mtlistfile): fix reading MT3D budget (#1899) * fix character span reading tkstp with a regex match * skip particle information reading GW-Mass Budget Co-authored-by: Ralf Junghanns --- autotest/test_listbudget.py | 4 + examples/data/mt3d_test/mt3d_with_adv.list | 330 +++++++++++++++++++++ flopy/utils/mtlistfile.py | 44 ++- 3 files changed, 374 insertions(+), 4 deletions(-) create mode 100644 examples/data/mt3d_test/mt3d_with_adv.list diff --git a/autotest/test_listbudget.py b/autotest/test_listbudget.py index 7f39e8d089..a9d7ce7929 100644 --- a/autotest/test_listbudget.py +++ b/autotest/test_listbudget.py @@ -116,6 +116,10 @@ def test_mtlist(example_data_path): mt = MtListBudget(mt_dir / "mcomp.list") df_gw, df_sw = mt.parse(forgive=False, diff=False, start_datetime=None) + mt_dir = example_data_path / "mt3d_test" + mt = MtListBudget(mt_dir / "mt3d_with_adv.list") + df_gw, df_sw = mt.parse(forgive=False, diff=False, start_datetime=None) + mt_dir = example_data_path / "mt3d_test" mt = MtListBudget(mt_dir / "CrnkNic.mt3d.list") 
df_gw, df_sw = mt.parse(forgive=False, diff=True, start_datetime=None) diff --git a/examples/data/mt3d_test/mt3d_with_adv.list b/examples/data/mt3d_test/mt3d_with_adv.list new file mode 100644 index 0000000000..daf39ef40c --- /dev/null +++ b/examples/data/mt3d_test/mt3d_with_adv.list @@ -0,0 +1,330 @@ + LISTING FILE: mt.list + UNIT 16 + + OPENING mt3d_link.ftl + FILE TYPE:FTL UNIT 10 + + OPENING mt.btn + FILE TYPE:BTN UNIT 31 + + OPENING mt.adv + FILE TYPE:ADV UNIT 32 + + OPENING mt.dsp + FILE TYPE:DSP UNIT 33 + + OPENING mt.gcg + FILE TYPE:GCG UNIT 35 + + OPENING mt.ssm + FILE TYPE:SSM UNIT 34 + + OPENING mt.rct + FILE TYPE:RCT UNIT 36 + + +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + + + + + MT3DMS + + + A Modular 3D Multi-Species Transport Model + + + For Simulation of Advection, Dispersion and Chemical Reactions + + + of Contaminants in Groundwater Systems + + + + + +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + + ----- + | M T | ## BTN for MT3DMS, generated by Flopy. + | 3 D | ## + ----- + THE TRANSPORT MODEL CONSISTS OF 1 LAYER(S) 31 ROW(S) 31 COLUMN(S) + NUMBER OF STRESS PERIOD(S) FOR TRANSPORT SIMULATION = 1 + NUMBER OF ALL COMPONENTS INCLUDED IN SIMULATION = 1 + NUMBER OF MOBILE COMPONENTS INCLUDED IN SIMULATION = 1 + UNIT FOR TIME IS D ; UNIT FOR LENGTH IS M ; UNIT FOR MASS IS KG + OPTIONAL PACKAGES INCLUDED IN CURRENT SIMULATION: + o ADV ON UNIT 32 + o DSP ON UNIT 33 + o SSM ON UNIT 34 + o RCT ON UNIT 36 + o GCG ON UNIT 35 + + BTN5 -- BASIC TRANSPORT PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 31 + 15500 ELEMENTS OF THE X ARRAY USED BY THE BTN PACKAGE + 962 ELEMENTS OF THE IX ARRAY USED BY THE BTN PACKAGE + + FMI5 -- FLOW MODEL INTERFACE PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 10 + FLOW MODEL IS STEADY-STATE + + ADV5 -- ADVECTION PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 32 + ADVECTION IS SOLVED WITH THE HYBRID [MOC]/[MMOC] SCHEME + COURANT NUMBER ALLOWED IN SOLVING THE ADVECTION TERM = 0.750 + MAXIMUM NUMBER OF MOVING PARTICLES ALLOWED = 800000 + 3200000 ELEMENTS OF THE X ARRAY USED BY THE ADV PACKAGE + 1600961 ELEMENTS OF THE IX ARRAY USED BY THE ADV PACKAGE + + DSP5 -- DISPERSION PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 33 + 10573 ELEMENTS OF THE X ARRAY USED BY THE DSP PACKAGE + 0 ELEMENTS OF THE IX ARRAY USED BY THE DSP PACKAGE + + SSM5 -- SINK & SOURCE MIXING PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 34 + HEADER LINE OF THE SSM PACKAGE INPUT FILE: + T F F F F F F F F F F F F F F F + MAJOR STRESS COMPONENTS PRESENT IN THE FLOW MODEL: + o WELL [WEL] + MAXIMUM NUMBER OF POINT SINKS/SOURCES = 121 + 1573 ELEMENTS OF THE X ARRAY USED BY THE SSM PACKAGE + 0 ELEMENTS OF THE IX ARRAY BY THE SSM PACKAGE + + RCT5 -- CHEMICAL REACTION PACKAGE, VERSION 5, FEBRUARY 2010, INPUT READ FROM UNIT 36 + NO SORPTION [OR DUAL-DOMAIN MODEL] IS SIMULATED + NO FIRST-ORDER RATE REACTION IS SIMULATED + REACTION COEFFICIENTS ASSIGNED CELL-BY-CELL + INITIAL SORBED/IMMOBILE PHASE CONCENTRATION ASSIGNED BY DEFAULT + 0 ELEMENTS OF THE X ARRAY USED BY THE RCT PACKAGE + 0 ELEMENTS OF THE IX ARRAY USED BY THE RCT PACKAGE + + GCG5 -- GENERALIZED CONJUGATE GRADIENT SOLVER PACKAGE, VERSION 5, FEBRUARY 2010 INPUT READ FROM UNIT 35 + MAXIMUM OF 1 OUTER ITERATIONS + AND 50 INNER ITERATIONS ALLOWED FOR CLOSURE + THE PRECONDITIONING TYPE SELECTED IS MODIFIED INCOMPLETE CHOLESKY (MIC). 
+ DISPERSION CROSS TERMS LUMPED INTO RIGHT-HAND-SIDE + 21192 ELEMENTS OF THE X ARRAY USED BY THE GCG PACKAGE + 150 ELEMENTS OF THE IX ARRAY USED BY THE GCG PACKAGE + + .......................................... + ELEMENTS OF THE X ARRAY USED = 3248839 + ELEMENTS OF THE IX ARRAY USED = 1602074 + .......................................... + + LAYER NUMBER AQUIFER TYPE + ------------ ------------ + 1 0 + + WIDTH ALONG ROWS (DELR) READ ON UNIT 31 USING FORMAT: " (31E15.6) " + ----------------------------------------------------------------------------- + + WIDTH ALONG COLS (DELC) READ ON UNIT 31 USING FORMAT: " (31E15.6) " + ----------------------------------------------------------------------------- + TOP ELEV. OF 1ST LAYER = 10.00000 + + CELL THICKNESS (DZ) FOR LAYER 1 READ ON UNIT 31 USING FORMAT: " (31E15.6) " + ------------------------------------------------------------------------------------------ + POROSITY = 0.3000000 FOR LAYER 1 + CONCN. BOUNDARY ARRAY = 1 FOR LAYER 1 + INITIAL CONC.: COMP. 01 = 0.000000 FOR LAYER 1 + + VALUE INDICATING INACTIVE CONCENTRATION CELLS = 0.1000000E+31 + MINIMUM SATURATED THICKNESS [THKMIN] ALLOWED = 0.0100 OF TOTAL CELL THICKNESS + + + OUTPUT CONTROL OPTIONS + ---------------------- + + DO NOT PRINT CELL CONCENTRATION + DO NOT PRINT PARTICLE NUMBER IN EACH CELL + DO NOT PRINT RETARDATION FACTOR + DO NOT PRINT DISPERSION COEFFICIENT + SAVE DISSOLVED PHASE CONCENTRATIONS IN UNFORMATTED FILES [MT3Dnnn.UCN] + FOR EACH SPECIES ON UNITS 201 AND ABOVE + + NUMBER OF TIMES AT WHICH SIMULATION RESULTS ARE SAVED = 0 + + NUMBER OF OBSERVATION POINTS = 0 + + SAVE ONE-LINE SUMMARY OF MASS BUDGETS IN FILES [MT3Dnnn.MAS] + FOR EACH SPECIES ON UNITS 601 AND ABOVE, EVERY 1 TRANSPORT STEPS + + MAXIMUM LENGTH ALONG THE X (J) AXIS = 311.0040 + MAXIMUM LENGTH ALONG THE Y (I) AXIS = 313.7939 + MAXIMUM LENGTH ALONG THE Z (K) AXIS = 1.000000 + + + ADVECTION SOLUTION OPTIONS + -------------------------- + + ADVECTION IS SOLVED WITH THE HYBRID [MOC]/[MMOC] SCHEME + COURANT NUMBER ALLOWED IN SOLVING THE ADVECTION TERM = 0.750 + MAXIMUM NUMBER OF MOVING PARTICLES ALLOWED = 800000 + METHOD FOR PARTICLE TRACKING IS [MIXED ORDER] + CONCENTRATION WEIGHTING FACTOR [WD] = 0.500 + THE CONCENTRATION GRADIENT CONSIDERED NEGLIGIBLE [DCEPS] = 0.1000000E-04 + INITIAL PARTICLES ARE PLACED ON 2 VERTICAL PLANE(S) WITHIN CELL BLOCK + PARTICLE NUMBER PER CELL IF DCCELL =< DCEPS = 10 + PARTICLE NUMBER PER CELL IF DCCELL > DCEPS = 40 + MINIMUM PARTICLE NUMBER ALLOWD PER CELL = 5 + MAXIMUM PARTICLE NUMBER ALLOWD PER CELL = 80 + MULTIPLIER OF PARTICLE NUMBER AT SOURCE = 1.00 + SCHEME FOR CONCENTRATION INTERPOLATION IS [LINEAR] + PARTICLES FOR APPROXIMATING A SINK CELL IN THE [MMOC] SCHEME + ARE PLACED RANDOMLY WITHIN CELL BLOCK + NUMBER OF PARTICLES USED TO APPROXIMATE A SINK CELL IN THE [MMOC] SCHEME = 15 + CRITICAL CONCENTRATION GRADIENT USED IN THE "HMOC" SCHEME [DCHMOC] = 0.1000E-03 + THE "MOC" SOLUTION IS USED WHEN DCCELL > DCHMOC + THE "MMOC" SOLUTION IS USED WHEN DCCELL =< DCHMOC + + + DISPERSION INPUT PARAMETERS + --------------------------- + + LONG. DISPERSIVITY (AL) = 0.1000000E-01 FOR LAYER 1 + H. TRANS./LONG. DISP. = 0.1000000 + V. TRANS./LONG. DISP. 
= 0.1000000E-01 + DIFFUSION COEFFICIENT = 0.1000000E-08 + + + SORPTION AND 1ST/0TH ORDER REACTION PARAMETERS + ---------------------------------------------- + + + + + SOLUTION BY THE GENERALIZED CONJUGATE GRADIENT METHOD + ----------------------------------------------------- + MAXIMUM OUTER ITERATIONS ALLOWED FOR CLOSURE = 1 + MAXIMUM INNER ITERATIONS ALLOWED FOR CLOSURE = 50 + PRECONDITIONING TYPE SELECTED = 3 + ACCELERATION PARAMETER = 1.0000 + CONCENTRATION CHANGE CRITERION FOR CLOSURE = 0.10000E-04 + GCG CONCENTRATION CHANGE PRINTOUT INTERVAL = 999 + + + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + STRESS PERIOD NO. 001 + ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + + + LENGTH OF CURRENT STRESS PERIOD = 26.00000 + NUMBER OF TIME STEPS FOR CURRENT STRESS PERIOD = 1 + TIME STEP MULTIPLIER USED IN FLOW SOLUTION = 1.000000 + + ***Type of Transport Simulation is TRANSIENT + + USER-SPECIFIED TRANSPORT STEPSIZE = 0.000000 D + MAXIMUM NUMBER OF TRANSPORT STEPS ALLOWED IN ONE FLOW TIME STEP = 50000 + MULTIPLIER FOR SUCCESSIVE TRANSPORT STEPS [USED IN IMPLICIT SCHEMES] = 1.000 + MAXIMUM TRANSPORT STEP SIZE [USED IN IMPLICIT SCHEMES] = 0.000000 D + + NO LAYER ROW COLUMN CONCENTRATION TYPE COMPONENT + 1 1 16 16 50.00000 WELL 1 + + + ================================================ + TIME STEP NO. 001 + ================================================ + + FROM TIME = 0.0000 TO 26.000 + + + "THKSAT " FLOW TERMS FOR TIME STEP 1, STRESS PERIOD 1 READ UNFORMATTED ON UNIT 10 + -------------------------------------------------------------------------------------------- + + "QXX " FLOW TERMS FOR TIME STEP 1, STRESS PERIOD 1 READ UNFORMATTED ON UNIT 10 + -------------------------------------------------------------------------------------------- + + "QYY " FLOW TERMS FOR TIME STEP 1, STRESS PERIOD 1 READ UNFORMATTED ON UNIT 10 + -------------------------------------------------------------------------------------------- + + MAXIMUM STEPSIZE DURING WHICH ANY PARTICLE CANNOT MOVE MORE THAN ONE CELL + = 1.666 (WHEN MIN. R.F.=1) AT K= 1, I= 16, J= 15 + + MAXIMUM STEPSIZE WHICH MEETS STABILITY CRITERION OF THE ADVECTION TERM + (FOR PURE FINITE-DIFFERENCE OPTION, MIXELM=0) + = 0.6094 (WHEN MIN. R.F.=1) AT K= 1, I= 16, J= 16 + + "CNH " FLOW TERMS FOR TIME STEP 1, STRESS PERIOD 1 READ UNFORMATTED ON UNIT 10 + -------------------------------------------------------------------------------------------- + + "WEL " FLOW TERMS FOR TIME STEP 1, STRESS PERIOD 1 READ UNFORMATTED ON UNIT 10 + -------------------------------------------------------------------------------------------- + + + TOTAL NUMBER OF POINT SOURCES/SINKS PRESENT IN THE FLOW MODEL = 121 + + MAXIMUM STEPSIZE WHICH MEETS STABILITY CRITERION OF THE SINK & SOURCE TERM + = 0.3047 (WHEN MIN. R.F.=1) AT K= 1, I= 16, J= 16 + + MAXIMUM STEPSIZE WHICH MEETS STABILITY CRITERION OF THE DISPERSION TERM + = 307.1 (WHEN MIN. 
R.F.=1) AT K= 1, I= 16, J= 16 + + + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 1 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 2 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 3 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 4 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 5 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 6 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 7 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 8 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 9 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 10 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 11 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 12 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 13 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 14 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 15 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 16 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 17 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 18 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 19 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 20 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + 1 CALLS TO GCG PACKAGE FOR TRANSPORT TIME STEP 21 IN FLOW TIME STEP 1 STRESS PERIOD 1 + 2 TOTAL ITERATIONS + MAXIMUM CONCENTRATION CHANGES FOR EACH ITERATION: + MAX. CHANGE LAYER,ROW,COL MAX. CHANGE LAYER,ROW,COL MAX. CHANGE LAYER,ROW,COL MAX. CHANGE LAYER,ROW,COL MAX. CHANGE LAYER,ROW,COL + ------------------------------------------------------------------------------------------------------------------------------------ + 0.1276E-03 ( 1, 14, 12) 0.7523E-36 ( 1, 3, 30) + + + + >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>FOR COMPONENT NO. 01<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< + + + ------------------------------------------- + TRANSPORT STEP NO. 21 + ------------------------------------------- + + TOTAL ELAPSED TIME SINCE BEGINNING OF SIMULATION = 26.00000 D + ..................................................................... 
+ + TOTAL PARTICLES USED IN THE CURRENT STEP = 4264 + PARTICLES ADDED AT BEGINNING OF THE STEP = 256 + PARTICLES REMOVED AT END OF LAST STEP = 0 + + CUMMULATIVE MASS BUDGETS AT END OF TRANSPORT STEP 21, TIME STEP 1, STRESS PERIOD 1 + ------------------------------------------------------------------------------------------ + + IN OUT + ---------------- ---------------- + CONSTANT CONCENTRATION: 0.000000 0.000000 + CONSTANT HEAD: 0.000000 -0.1140626E-09 + WELLS: 130000.0 0.000000 + 1ST/0TH ORDER REACTION: 0.000000 0.000000 + MASS STORAGE (SOLUTE): 43.12313 -122520.2 + --------------------------------------------------------------------------- + [TOTAL]: 130043.1 KG -122520.2 KG + + NET (IN - OUT): 7522.922 + DISCREPANCY (PERCENT): 5.957257 + ----- + | M T | + | 3 D | END OF MODEL OUTPUT + ----- diff --git a/flopy/utils/mtlistfile.py b/flopy/utils/mtlistfile.py index 9f70c72b41..af682bc7a4 100644 --- a/flopy/utils/mtlistfile.py +++ b/flopy/utils/mtlistfile.py @@ -3,13 +3,12 @@ mt3d(usgs) run. Also includes support for SFT budget. """ +import re import warnings import numpy as np import pandas as pd -from ..utils import import_optional_dependency - class MtListBudget: """ @@ -24,7 +23,6 @@ class MtListBudget: Examples -------- >>> mt_list = MtListBudget("my_mt3d.list") - >>> incremental, cumulative = mt_list.get_budget() >>> gw_df, sw_df = mt_list.parse(start_datetime="10-21-2015") """ @@ -48,6 +46,7 @@ def __init__(self, file_name): self.time_key = line.lower() line = "TRANSPORT TIME STEP" self.tkstp_key = line.lower() + self.particles_key = "TOTAL PARTICLES USED IN THE CURRENT STEP" return @@ -110,7 +109,18 @@ def parse( else: self._parse_sw(f, line) elif self.tkstp_key in line: - self.tkstp_overflow = int(line[51:58]) + try: + self.tkstp_overflow = ( + self._extract_number_between_strings( + line, self.tkstp_key, "in" + ) + ) + except Exception as e: + warnings.warn( + "error parsing TKSTP key " + f"starting on line {self.lcount}: {e!s}" + ) + break if len(self.gw_data) == 0: raise Exception("no groundwater budget info found...") @@ -257,6 +267,15 @@ def _parse_gw(self, f, line): line = self._readline(f) if line is None: raise Exception("EOF while reading from totim to time step") + + if self.particles_key.lower() in line: + for _ in range(4): + line = self._readline(f) + if line is None: + raise Exception( + "EOF while reading from time step to particles" + ) + try: kper = int(line[-6:-1]) kstp = int(line[-26:-21]) @@ -472,3 +491,20 @@ def _add_to_sw_data(self, inout, item, cval, fval, comp): if iitem not in self.sw_data.keys(): self.sw_data[iitem] = [] self.sw_data[iitem].append(val) + + @staticmethod + def _extract_number_between_strings( + input_string, start_string, end_string + ): + pattern = ( + rf"{re.escape(start_string)}\s*(\d+)\s*{re.escape(end_string)}" + ) + match = re.search(pattern, input_string) + + if match: + extracted_number = int(match.group(1)) + return extracted_number + else: + raise Exception( + f"Error extracting number between {start_string} and {end_string} in {input_string}" + ) From 8084b677d9b5ebfe4f77f9c55604ea9579acfdcd Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 3 Aug 2023 19:26:38 -0400 Subject: [PATCH 031/101] test(generate_classes): parametrize, allow configuring via CLI/env (#1903) --- autotest/conftest.py | 8 ++++ autotest/test_generate_classes.py | 80 +++++++++++++++++++++++-------- 2 files changed, 69 insertions(+), 19 deletions(-) diff --git a/autotest/conftest.py b/autotest/conftest.py index a9d817c8d3..b85458c044 100644 --- 
a/autotest/conftest.py +++ b/autotest/conftest.py @@ -118,6 +118,14 @@ def pytest_addoption(parser): "but automated tests should probably also check patch collections or figure & axis properties.)", ) + # for test_generate_classes.py + parser.addoption( + "--ref", + action="append", + type=str, + help="Include extra refs to test. Useful for testing branches on a fork, e.g. /modflow6/.", + ) + def pytest_report_header(config): """Header for pytest to show versions of packages.""" diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py index 06527a56fe..e48c73eb17 100644 --- a/autotest/test_generate_classes.py +++ b/autotest/test_generate_classes.py @@ -1,15 +1,62 @@ import sys +from datetime import datetime +from os import environ from pathlib import Path from pprint import pprint +from typing import Iterable +from warnings import warn import pytest +def nonempty(itr: Iterable): + for x in itr: + if x: + yield x + + +def pytest_generate_tests(metafunc): + # defaults + ref = [ + "MODFLOW-USGS/modflow6/develop", + "MODFLOW-USGS/modflow6/master", + "MODFLOW-USGS/modflow6/6.4.1", + ] + + # refs provided as env vars override the defaults + ref_env = environ.get("TEST_GENERATE_CLASSES_REF") + if ref_env: + ref = nonempty(ref_env.strip().split(",")) + + # refs given as CLI options override everything + ref_opt = metafunc.config.getoption("--ref") + if ref_opt: + ref = nonempty([o.strip() for o in ref_opt]) + + # drop duplicates + ref = list(dict.fromkeys(ref)) + + # drop and warn refs with invalid format + # i.e. not "owner/repo/branch" + for r in ref: + spl = r.split("/") + if len(spl) != 3 or not all(spl): + warn(f"Skipping invalid ref: {r}") + ref.remove(r) + + key = "ref" + if key in metafunc.fixturenames: + metafunc.parametrize(key, ref, scope="session") + + @pytest.mark.mf6 -def test_generate_classes_from_dfn(virtualenv, project_root_path): +@pytest.mark.slow +def test_generate_classes_from_dfn(virtualenv, project_root_path, ref): python = virtualenv.python venv = Path(python).parent - print(f"Using temp venv at {venv} with python {python}") + print( + f"Using temp venv at {venv} with python {python} to test class generation from {ref}" + ) # install flopy/dependencies pprint(virtualenv.run(f"pip install {project_root_path}")) @@ -31,9 +78,10 @@ def test_generate_classes_from_dfn(virtualenv, project_root_path): mod_file_times = [Path(mod_file).stat().st_mtime for mod_file in mod_files] pprint(mod_files) - # generate classes from develop branch - owner = "MODFLOW-USGS" - branch = "develop" + # generate classes + spl = ref.split("/") + owner = spl[0] + branch = spl[2] pprint( virtualenv.run( "python -c 'from flopy.mf6.utils import generate_classes; generate_classes(owner=\"" @@ -44,31 +92,25 @@ def test_generate_classes_from_dfn(virtualenv, project_root_path): ) ) + def get_mtime(f): + try: + return Path(f).stat().st_mtime + except: + return 0 # if file not found + # make sure files were regenerated modified_files = [ mod_files[i] for i, (before, after) in enumerate( zip( mod_file_times, - [Path(mod_file).stat().st_mtime for mod_file in mod_files], + [get_mtime(f) for f in mod_files], ) ) - if after > before + if after > 0 and after > before ] assert any(modified_files) print(f"{len(modified_files)} files were modified:") pprint(modified_files) - # try with master branch - branch = "master" - pprint( - virtualenv.run( - "python -c 'from flopy.mf6.utils import generate_classes; generate_classes(owner=\"" - + owner - + '", branch="' - + branch - + "\", backup=False)'" 
- ) - ) - # todo checkout mf6 and test with dfnpath? test with backups? From 68f0c3bafcd2c806f22b0f75e8d18cc5c4428aa2 Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Sun, 6 Aug 2023 09:08:52 -0700 Subject: [PATCH 032/101] fix(check): Updated flopy's check to work with cellid -1 values (#1885) * fix(test): should be a complete grid * fix(model grid): Always resync mf2005 grids * Revert "fix(model grid): Always resync mf2005 grids" * fix(grid idomain): grid idomain now defaults to all 1's for MF6 only * fix(notebook): notebook contained out of bounds cellid * fix(modelgrid): always force modelgrid to resync until an idomain array is provided --------- Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- .docs/Notebooks/mf6_data_tutorial06.py | 9 +++--- autotest/regression/test_mf6.py | 4 ++- flopy/discretization/vertexgrid.py | 3 -- flopy/mf6/data/mfdatalist.py | 32 ++++++++++++++++++++- flopy/mf6/mfmodel.py | 40 ++++++++++++++++++++------ 5 files changed, 71 insertions(+), 17 deletions(-) diff --git a/.docs/Notebooks/mf6_data_tutorial06.py b/.docs/Notebooks/mf6_data_tutorial06.py index 92e7aa48cf..9fd43753a0 100644 --- a/.docs/Notebooks/mf6_data_tutorial06.py +++ b/.docs/Notebooks/mf6_data_tutorial06.py @@ -1,13 +1,14 @@ # --- # jupyter: # jupytext: +# notebook_metadata_filter: metadata # text_representation: # extension: .py # format_name: light -# format_version: "1.5" -# jupytext_version: 1.5.1 +# format_version: '1.5' +# jupytext_version: 1.14.4 # kernelspec: -# display_name: Python 3 +# display_name: Python 3 (ipykernel) # language: python # name: python3 # metadata: @@ -159,7 +160,7 @@ # Note that the cellid information (layer, row, column) is encapsulated in # a tuple. -stress_period_data = [((1, 10, 10), 100.0), ((1, 10, 11), 105.0)] +stress_period_data = [((1, 8, 8), 100.0), ((1, 9, 9), 105.0)] # build chd package chd = flopy.mf6.modflow.mfgwfchd.ModflowGwfchd( gwf, diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index 6b8b8d207e..5479014475 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -677,6 +677,7 @@ def test_np001(function_tmpdir, example_data_path): sim.delete_output_files() # test error checking + sim.simulation_data.verify_data = False drn_package = ModflowGwfdrn( model, print_input=True, @@ -697,6 +698,7 @@ def test_np001(function_tmpdir, example_data_path): k=100001.0, k33=1e-12, ) + sim.simulation_data.verify_data = True chk = sim.check() summary = ".".join(chk[0].summary_array.desc) assert "drn_1 package: invalid BC index" in summary @@ -3083,7 +3085,7 @@ def test028_create_tests_sfr(function_tmpdir, example_data_path): delc=5000.0, top=top, botm=botm, - idomain=idomain, + #idomain=idomain, filename=f"{model_name}.dis", ) strt = testutils.read_std_array(os.path.join(pth, "strt.txt"), "float") diff --git a/flopy/discretization/vertexgrid.py b/flopy/discretization/vertexgrid.py index ed516e5710..44a9872f57 100644 --- a/flopy/discretization/vertexgrid.py +++ b/flopy/discretization/vertexgrid.py @@ -101,9 +101,6 @@ def __init__( self._vertices = vertices self._cell1d = cell1d self._cell2d = cell2d - self._top = top - self._botm = botm - self._idomain = idomain if botm is None: self._nlay = nlay self._ncpl = ncpl diff --git a/flopy/mf6/data/mfdatalist.py b/flopy/mf6/data/mfdatalist.py index 7cffa30bc0..d622a2500d 100644 --- a/flopy/mf6/data/mfdatalist.py +++ b/flopy/mf6/data/mfdatalist.py @@ -505,6 +505,36 @@ def _check_valid_cellids(self): idomain_val = idomain # cellid should be 
within the model grid for idx, cellid_part in enumerate(record[index]): + if cellid_part == -1: + # cellid not defined, all values should + # be -1 + match = all( + elem == record[index][0] + for elem in record[index] + ) + if not match: + message = ( + f"Invalid cellid {record[index]}" + ) + ( + type_, + value_, + traceback_, + ) = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self.structure.path, + "storing data", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ) + continue if ( model_shape[idx] <= cellid_part or cellid_part < 0 @@ -530,7 +560,7 @@ def _check_valid_cellids(self): ) idomain_val = idomain_val[cellid_part] # cellid should be at an active cell - if idomain_val < 1: + if record[index][0] != -1 and idomain_val < 1: message = ( "Cellid {} is outside of the " "active model grid" diff --git a/flopy/mf6/mfmodel.py b/flopy/mf6/mfmodel.py index b576ab3c50..54ce7f9e8e 100644 --- a/flopy/mf6/mfmodel.py +++ b/flopy/mf6/mfmodel.py @@ -346,7 +346,7 @@ def modelgrid(self): model. """ - + force_resync = False if not self._mg_resync: return self._modelgrid if self.get_grid_type() == DiscretizationType.DIS: @@ -370,12 +370,17 @@ def modelgrid(self): angrot=self._modelgrid.angrot, ) else: + botm = dis.botm.array + idomain = dis.idomain.array + if idomain is None: + force_resync = True + idomain = self._resolve_idomain(idomain, botm) self._modelgrid = StructuredGrid( delc=dis.delc.array, delr=dis.delr.array, top=dis.top.array, - botm=dis.botm.array, - idomain=dis.idomain.array, + botm=botm, + idomain=idomain, lenuni=dis.length_units.array, crs=self._modelgrid.crs, xoff=self._modelgrid.xoffset, @@ -403,12 +408,17 @@ def modelgrid(self): angrot=self._modelgrid.angrot, ) else: + botm = dis.botm.array + idomain = dis.idomain.array + if idomain is None: + force_resync = True + idomain = self._resolve_idomain(idomain, botm) self._modelgrid = VertexGrid( vertices=dis.vertices.array, cell2d=dis.cell2d.array, top=dis.top.array, - botm=dis.botm.array, - idomain=dis.idomain.array, + botm=botm, + idomain=idomain, lenuni=dis.length_units.array, crs=self._modelgrid.crs, xoff=self._modelgrid.xoffset, @@ -498,12 +508,17 @@ def modelgrid(self): angrot=self._modelgrid.angrot, ) else: + botm = dis.botm.array + idomain = dis.idomain.array + if idomain is None: + force_resync = True + idomain = self._resolve_idomain(idomain, botm) self._modelgrid = VertexGrid( vertices=dis.vertices.array, cell1d=dis.cell1d.array, top=dis.top.array, - botm=dis.botm.array, - idomain=dis.idomain.array, + botm=botm, + idomain=idomain, lenuni=dis.length_units.array, crs=self._modelgrid.crs, xoff=self._modelgrid.xoffset, @@ -546,7 +561,7 @@ def modelgrid(self): angrot, self._modelgrid.crs, ) - self._mg_resync = not self._modelgrid.is_complete + self._mg_resync = not self._modelgrid.is_complete or force_resync return self._modelgrid @property @@ -1937,3 +1952,12 @@ def plot(self, SelPackList=None, **kwargs): ) return axes + + @staticmethod + def _resolve_idomain(idomain, botm): + if idomain is None: + if botm is None: + return idomain + else: + return np.ones_like(botm) + return idomain From c25c0b34a173fa9788ccd7a0de16da2d02f96e6c Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Mon, 7 Aug 2023 13:45:18 -0400 Subject: [PATCH 033/101] fix(BaseModel): don't suppress error if exe not found (#1901) * warn at init/load time if exe not found, raise error at model run time --- autotest/test_mbase.py | 17 
+++++++++----- autotest/test_modflow.py | 50 ++++++++++++++++++++++++++++++++++++++++ flopy/mbase.py | 25 +++++++++++++++----- 3 files changed, 80 insertions(+), 12 deletions(-) diff --git a/autotest/test_mbase.py b/autotest/test_mbase.py index a2b3f6f619..757909f960 100644 --- a/autotest/test_mbase.py +++ b/autotest/test_mbase.py @@ -18,7 +18,7 @@ def mf6_model_path(example_data_path): @requires_exe("mf6") @pytest.mark.parametrize("use_ext", [True, False]) -def test_resolve_exe_named(function_tmpdir, use_ext): +def test_resolve_exe_by_name(function_tmpdir, use_ext): if use_ext and system() != "Windows": pytest.skip(".exe extensions are Windows-only") @@ -31,7 +31,7 @@ def test_resolve_exe_named(function_tmpdir, use_ext): @requires_exe("mf6") @pytest.mark.parametrize("use_ext", [True, False]) -def test_resolve_exe_full_path(function_tmpdir, use_ext): +def test_resolve_exe_by_abs_path(function_tmpdir, use_ext): if use_ext and system() != "Windows": pytest.skip(".exe extensions are Windows-only") @@ -44,7 +44,8 @@ def test_resolve_exe_full_path(function_tmpdir, use_ext): @requires_exe("mf6") @pytest.mark.parametrize("use_ext", [True, False]) -def test_resolve_exe_rel_path(function_tmpdir, use_ext): +@pytest.mark.parametrize("forgive", [True, False]) +def test_resolve_exe_by_rel_path(function_tmpdir, use_ext, forgive): if use_ext and system() != "Windows": pytest.skip(".exe extensions are Windows-only") @@ -66,9 +67,13 @@ def test_resolve_exe_rel_path(function_tmpdir, use_ext): assert actual.lower() == expected assert which(actual) - # should raise an error if exe DNE - with pytest.raises(FileNotFoundError): - resolve_exe("../bin/mf2005") + # check behavior if exe DNE + with ( + pytest.warns(UserWarning) + if forgive + else pytest.raises(FileNotFoundError) + ): + assert not resolve_exe("../bin/mf2005", forgive) def test_run_model_when_namefile_not_in_model_ws( diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index 3838ef28cf..af98ea8a95 100644 --- a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -200,6 +200,56 @@ def test_mt_modelgrid(function_tmpdir): assert np.array_equal(swt.modelgrid.idomain, ml.modelgrid.idomain) +@requires_exe("mp7") +def test_exe_selection(example_data_path, function_tmpdir): + model_path = example_data_path / "freyberg" + namfile_path = model_path / "freyberg.nam" + + # no selection defaults to mf2005 + exe_name = "mf2005" + assert Path(Modflow().exe_name).name == exe_name + assert Path(Modflow(exe_name=None).exe_name).name == exe_name + assert ( + Path(Modflow.load(namfile_path, model_ws=model_path).exe_name).name + == exe_name + ) + assert ( + Path( + Modflow.load( + namfile_path, exe_name=None, model_ws=model_path + ).exe_name + ).name + == exe_name + ) + + # user-specified (just for testing - there is no legitimate reason + # to use mp7 with Modflow but Modpath7 derives from BaseModel too) + exe_name = "mp7" + assert Path(Modflow(exe_name=exe_name).exe_name).name == exe_name + assert ( + Path( + Modflow.load( + namfile_path, exe_name=exe_name, model_ws=model_path + ).exe_name + ).name + == exe_name + ) + + # init/load should warn if exe DNE + exe_name = "not_an_exe" + with pytest.warns(UserWarning): + ml = Modflow(exe_name=exe_name) + with pytest.warns(UserWarning): + ml = Modflow.load(namfile_path, exe_name=exe_name, model_ws=model_path) + + # run should error if exe DNE + ml = Modflow.load(namfile_path, exe_name=exe_name, model_ws=model_path) + ml.change_model_ws(function_tmpdir) + ml.write_input() + with pytest.raises(ValueError): + 
ml.run_model() + + def test_free_format_flag(function_tmpdir): Lx = 100.0 Ly = 100.0 diff --git a/flopy/mbase.py b/flopy/mbase.py index c780ce093a..846bc6fca9 100644 --- a/flopy/mbase.py +++ b/flopy/mbase.py @@ -41,15 +41,21 @@ iprn = -1 -def resolve_exe(exe_name: Union[str, os.PathLike]) -> str: +def resolve_exe( + exe_name: Union[str, os.PathLike], forgive: bool = False +) -> str: """ - Resolves the absolute path of the executable. + Resolves the absolute path of the executable, raising FileNotFoundError if the executable + cannot be found (set forgive to True to return None and warn instead of raising an error). Parameters ---------- exe_name : str or PathLike The executable's name or path. If only the name is provided, the executable must be on the system path. + forgive : bool + If True and executable cannot be found, return None and warn + rather than raising a FileNotFoundError. Defaults to False. Returns ------- @@ -75,6 +81,12 @@ def resolve_exe(exe_name: Union[str, os.PathLike]) -> str: # try tilde-expanded abspath without .exe suffix exe = which(Path(exe_name[:-4]).expanduser().absolute()) if exe is None: + if forgive: + warn( + f"The program {exe_name} does not exist or is not executable." + ) + return None + raise FileNotFoundError( f"The program {exe_name} does not exist or is not executable." ) @@ -376,10 +388,11 @@ def __init__( self._namefile = self.__name + "." + self.namefile_ext self._packagelist = [] self.heading = "" - try: - self.exe_name = resolve_exe(exe_name) - except: - self.exe_name = "mf2005" + self.exe_name = ( + "mf2005" + if exe_name is None + else resolve_exe(exe_name, forgive=True) + ) self._verbose = verbose self.external_path = None self.external_extension = "ref" From 6585a1bddf45167f843a3c15931d10ee3b2b63da Mon Sep 17 00:00:00 2001 From: jdhughes-usgs Date: Tue, 8 Aug 2023 09:02:11 -0500 Subject: [PATCH 034/101] readme: update flopy citation with Hughes and others (2023) (#1908) * update cff file with Hughes et al. (2023) as the preferred reference and Bakker et al (2016) as a reference --- CITATION.cff | 79 +++++++++++++++++++++++++++++++++------------------- README.md | 4 ++- 2 files changed, 54 insertions(+), 29 deletions(-) diff --git a/CITATION.cff b/CITATION.cff index f911a18b68..9fcc959ca7 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -1,6 +1,6 @@ cff-version: 1.2.0 -message: If you use this software, please cite both the article from preferred-citation - and the software itself. +message: If you use this software, please cite both the article from preferred-citation, + references, and the software itself. type: software title: FloPy version: 3.5.0.dev0 @@ -59,33 +59,56 @@ authors: preferred-citation: type: article authors: - - family-names: Bakker - given-names: Mark - orcid: https://orcid.org/0000-0002-5629-2861 - - family-names: Post - given-names: Vincent - orcid: https://orcid.org/0000-0002-9463-3081 - - family-names: Langevin - given-names: Christian D. - orcid: https://orcid.org/0000-0001-5610-9759 - family-names: Hughes given-names: Joseph D. orcid: https://orcid.org/0000-0003-1311-2354 - - family-names: White - given-names: Jeremy T. - orcid: https://orcid.org/0000-0002-4950-1469 - - family-names: Starn - given-names: Jon Jeffrey - orcid: https://orcid.org/0000-0001-5909-0010 - - family-names: Fienen - given-names: Michael N. - orcid: https://orcid.org/0000-0002-7756-4651 - title: Scripting MODFLOW Model Development Using Python and FloPy - doi: 10.1111/gwat.12413 + - family-names: Langevin + given-names: Christian D. 
+ orcid: https://orcid.org/0000-0001-5610-9759 + - family-names: Paulinski + given-names: Scott R. + orcid: https://orcid.org/0000-0001-6548-8164 + - family-names: Larsen + given-names: Joshua D. + orcid: https://orcid.org/0000-0002-1218-800X + - family-names: Brakenhoff + given-names: David + orcid: https://orcid.org/0000-0002-2993-2202 + title: FloPy Workflows for Creating Structured and Unstructured MODFLOW Models + doi: 10.1111/gwat.13327 journal: Groundwater - volume: 54 - issue: 5 - start: 733 - end: 739 - year: 2016 - month: 9 + year: 2023 + month: 5 +references: + - authors: + - family-names: Bakker + given-names: Mark + orcid: https://orcid.org/0000-0002-5629-2861 + - family-names: Post + given-names: Vincent + orcid: https://orcid.org/0000-0002-9463-3081 + - family-names: Langevin + given-names: Christian D. + orcid: https://orcid.org/0000-0001-5610-9759 + - family-names: Hughes + given-names: Joseph D. + orcid: https://orcid.org/0000-0003-1311-2354 + - family-names: White + given-names: Jeremy T. + orcid: https://orcid.org/0000-0002-4950-1469 + - family-names: Starn + given-names: Jon Jeffrey + orcid: https://orcid.org/0000-0001-5909-0010 + - family-names: Fienen + given-names: Michael N. + orcid: https://orcid.org/0000-0002-7756-4651 + type: article + title: Scripting MODFLOW Model Development Using Python and FloPy + doi: 10.1111/gwat.12413 + journal: Groundwater + volume: 54 + issue: 5 + start: 733 + end: 739 + year: 2016 + month: 9 diff --git a/README.md b/README.md index 616afcf984..e293049cc9 100644 --- a/README.md +++ b/README.md @@ -136,7 +136,9 @@ To install the latest release candidate type: How to Cite ----------------------------------------------- -##### ***Citation for FloPy:*** +##### ***Citations for FloPy:*** + +[Hughes, J.D., Langevin, C.D., Paulinski, S.R., Larsen, J.D. and Brakenhoff, D. (2023), FloPy Workflows for Creating Structured and Unstructured MODFLOW Models. Groundwater. https://doi.org/10.1111/gwat.13327](https://doi.org/10.1111/gwat.13327) [Bakker, Mark, Post, Vincent, Langevin, C. D., Hughes, J. D., White, J. T., Starn, J. J. and Fienen, M. N., 2016, Scripting MODFLOW Model Development Using Python and FloPy: Groundwater, v. 54, p. 
733–739, doi:10.1111/gwat.12413.](https://doi.org/10.1111/gwat.12413) From 1cb7594d12a2aa385a567613e9f840de63ba7157 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Tue, 8 Aug 2023 10:03:11 -0400 Subject: [PATCH 035/101] feat(get-modflow): allow specifying repo owner (#1910) --- autotest/test_get_modflow.py | 21 +++++++++++++++------ docs/get_modflow.md | 4 +++- flopy/utils/get_modflow.py | 32 ++++++++++++++++++++++++-------- 3 files changed, 42 insertions(+), 15 deletions(-) diff --git a/autotest/test_get_modflow.py b/autotest/test_get_modflow.py index f7c3c7f124..3228dce218 100644 --- a/autotest/test_get_modflow.py +++ b/autotest/test_get_modflow.py @@ -27,6 +27,9 @@ / ("Scripts" if system() == "Windows" else "bin"), "home": Path.home() / ".local" / "bin", } +owner_options = [ + "MODFLOW-USGS", +] repo_options = { "executables": [ "crt", @@ -95,14 +98,14 @@ def append_ext(path: str): @pytest.mark.parametrize("per_page", [-1, 0, 101, 1000]) def test_get_releases_bad_page_size(per_page): with pytest.raises(ValueError): - get_releases("executables", per_page=per_page) + get_releases(repo="executables", per_page=per_page) @flaky @requires_github @pytest.mark.parametrize("repo", repo_options.keys()) def test_get_releases(repo): - releases = get_releases(repo) + releases = get_releases(repo=repo) assert "latest" in releases @@ -111,7 +114,7 @@ def test_get_releases(repo): @pytest.mark.parametrize("repo", repo_options.keys()) def test_get_release(repo): tag = "latest" - release = get_release(repo, tag) + release = get_release(repo=repo, tag=tag) assets = release["assets"] expected_assets = ["linux.zip", "mac.zip", "win64.zip"] @@ -245,11 +248,14 @@ def test_script_valid_options(function_tmpdir, downloads_dir): @flaky @requires_github @pytest.mark.slow +@pytest.mark.parametrize("owner", owner_options) @pytest.mark.parametrize("repo", repo_options.keys()) -def test_script(function_tmpdir, repo, downloads_dir): +def test_script(function_tmpdir, owner, repo, downloads_dir): bindir = str(function_tmpdir) stdout, stderr, returncode = run_get_modflow_script( bindir, + "--owner", + owner, "--repo", repo, "--downloads-dir", @@ -267,11 +273,14 @@ def test_script(function_tmpdir, repo, downloads_dir): @flaky @requires_github @pytest.mark.slow +@pytest.mark.parametrize("owner", owner_options) @pytest.mark.parametrize("repo", repo_options.keys()) -def test_python_api(function_tmpdir, repo, downloads_dir): +def test_python_api(function_tmpdir, owner, repo, downloads_dir): bindir = str(function_tmpdir) try: - get_modflow(bindir, repo=repo, downloads_dir=downloads_dir) + get_modflow( + bindir, owner=owner, repo=repo, downloads_dir=downloads_dir + ) except HTTPError as err: if err.code == 403: pytest.skip(f"GitHub {rate_limit_msg}") diff --git a/docs/get_modflow.md b/docs/get_modflow.md index 86a4ccc0b9..5e9cd3bcf1 100644 --- a/docs/get_modflow.md +++ b/docs/get_modflow.md @@ -73,7 +73,7 @@ Other auto-select options are only available if the current user can write files ## Selecting a distribution -By default the distribution from the [executables repository](https://github.com/MODFLOW-USGS/executables) is installed. This includes the MODFLOW 6 binary `mf6` and over 20 other related programs. 
The utility can also install from the main [MODFLOW 6 repo](https://github.com/MODFLOW-USGS/modflow6) or the [nightly build](https://github.com/MODFLOW-USGS/modflow6-nightly-build) distributions, which contain only: +By default the distribution from the [`MODFLOW-USGS/executables` repository](https://github.com/MODFLOW-USGS/executables) is installed. This includes the MODFLOW 6 binary `mf6` and over 20 other related programs. The utility can also install from the main [MODFLOW 6 repo](https://github.com/MODFLOW-USGS/modflow6) or the [nightly build](https://github.com/MODFLOW-USGS/modflow6-nightly-build) distributions, which contain only: - `mf6` - `mf5to6` @@ -85,3 +85,5 @@ To select a distribution, specify a repository name with the `--repo` command li - `executables` (default) - `modflow6` - `modflow6-nightly-build` + +The repository owner can also be configured with the `--owner` option. This can be useful for installing from unreleased MODFLOW 6 feature branches still in development — the only compatibility requirement is that release assets be named identically to those on the official repositories. diff --git a/flopy/utils/get_modflow.py b/flopy/utils/get_modflow.py index 2c979e163e..54c2a7d12c 100755 --- a/flopy/utils/get_modflow.py +++ b/flopy/utils/get_modflow.py @@ -24,7 +24,8 @@ from typing import Dict, List, Tuple -owner = "MODFLOW-USGS" +default_owner = "MODFLOW-USGS" +default_repo = "executables" # key is the repo name, value is the renamed file prefix for the download renamed_prefix = { "modflow6": "modflow6", @@ -93,8 +94,12 @@ def get_request(url, params={}): return urllib.request.Request(url, headers=headers) -def get_releases(repo, quiet=False, per_page=None) -> List[str]: +def get_releases( + owner=None, repo=None, quiet=False, per_page=None +) -> List[str]: """Get list of available releases.""" + owner = default_owner if owner is None else owner + repo = default_repo if repo is None else repo req_url = f"https://api.github.com/repos/{owner}/{repo}/releases" params = {} @@ -133,8 +138,10 @@ def get_releases(repo, quiet=False, per_page=None) -> List[str]: return avail_releases -def get_release(repo, tag="latest", quiet=False) -> dict: +def get_release(owner=None, repo=None, tag="latest", quiet=False) -> dict: """Get info about a particular release.""" + owner = default_owner if owner is None else owner + repo = default_repo if repo is None else repo api_url = f"https://api.github.com/repos/{owner}/{repo}" req_url = ( f"{api_url}/releases/latest" @@ -166,7 +173,7 @@ def get_release(repo, tag="latest", quiet=False) -> dict: ) from err elif err.code == 404: if releases is None: - releases = get_releases(repo, quiet) + releases = get_releases(owner, repo, quiet) if tag not in releases: raise ValueError( f"Release {tag} not found (choose from {', '.join(releases)})" @@ -287,7 +294,8 @@ def select_bindir(bindir, previous=None, quiet=False, is_cli=False) -> Path: def run_main( bindir, - repo="executables", + owner=default_owner, + repo=default_repo, release_id="latest", ostag=None, subset=None, @@ -304,6 +312,8 @@ def run_main( Writable path to extract executables. Auto-select options start with a colon character. See error message or other documentation for further information on auto-select options. + owner : str, default "MODFLOW-USGS" + Name of GitHub repository owner (user or organization). repo : str, default "executables" Name of GitHub repository. Choose one of "executables" (default), "modflow6", or "modflow6-nightly-build". 
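For reference, the `--owner`/`--repo` options documented above map directly onto the module's Python entry points, whose updated signatures appear in this diff. A minimal sketch, assuming network access to GitHub; the install directory below is hypothetical:

```python
# Minimal sketch using the get_releases/run_main signatures shown in
# this patch. "/tmp/modflow-bin" is a hypothetical writable directory;
# any writable path (or an auto-select option such as ":flopy") works.
from flopy.utils.get_modflow import get_releases, run_main

# list release tags available for the default owner/repo
print(get_releases(owner="MODFLOW-USGS", repo="executables"))

# install the latest "executables" distribution from the default owner
run_main("/tmp/modflow-bin", owner="MODFLOW-USGS", repo="executables", release_id="latest")
```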
@@ -393,7 +403,7 @@ def run_main( ) # get the selected release - release = get_release(repo, release_id, quiet) + release = get_release(owner, repo, release_id, quiet) assets = release.get("assets", []) for asset in assets: @@ -683,11 +693,17 @@ def cli_main(): "Option ':system' is '/usr/local/bin'." ) parser.add_argument("bindir", help=bindir_help) + parser.add_argument( + "--owner", + type=str, + default=default_owner, + help=f"GitHub repository owner; default is '{default_owner}'.", + ) parser.add_argument( "--repo", choices=available_repos, - default="executables", - help="Name of GitHub repository; default is 'executables'.", + default=default_repo, + help=f"Name of GitHub repository; default is '{default_repo}'.", ) parser.add_argument( "--release-id", From 72370269e2d6933245c65b3c93beb532016593f3 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Wed, 9 Aug 2023 18:21:16 -0400 Subject: [PATCH 036/101] refactor(generate_classes): deprecate branch for ref, introduce repo, test commit hashes (#1907) Co-authored-by: Mike Taves --- .github/workflows/commit.yml | 8 +++--- .github/workflows/examples.yml | 4 +-- .github/workflows/regression.yml | 4 +-- .github/workflows/release.yml | 4 +-- autotest/test_generate_classes.py | 32 ++++++++++++++++++------ docs/generate_classes.md | 19 +++++++++++---- docs/make_release.md | 2 +- flopy/mf6/utils/generate_classes.py | 38 ++++++++++++++++++++++++----- 8 files changed, 82 insertions(+), 29 deletions(-) diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 9864075694..daf46852a5 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -209,18 +209,18 @@ jobs: - name: Update FloPy packages if: runner.os != 'Windows' - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Update FloPy packages if: runner.os == 'Windows' shell: bash -l {0} - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Run tests if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -230,7 +230,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index 86eb6154e8..739c408e37 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -86,12 +86,12 @@ jobs: - name: Update FloPy packages if: runner.os != 'Windows' - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Update FloPy packages if: runner.os == 'Windows' shell: bash -l {0} - run: python -c 
'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Run example tests if: runner.os != 'Windows' diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml index 22981c46f8..daf9c8729a 100644 --- a/.github/workflows/regression.yml +++ b/.github/workflows/regression.yml @@ -70,12 +70,12 @@ jobs: - name: Update FloPy packages if: runner.os != 'Windows' - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Update FloPy packages if: runner.os == 'Windows' shell: bash -l {0} - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="develop", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' - name: Run regression tests if: runner.os != 'Windows' diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 6ff6af9cf4..9fa496b5b3 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -64,14 +64,14 @@ jobs: echo "version=${ver#"v"}" >> $GITHUB_OUTPUT - name: Update FloPy packages - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="master", backup=False)' + run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="master", backup=False)' - name: Lint Python files run: python scripts/pull_request_prepare.py - name: Run tests working-directory: autotest - run: pytest -v -m="not example and not regression" -n=auto --durations=0 --keep-failed=.failed + run: pytest -v -m="not example and not regression" -n=auto --durations=0 --keep-failed=.failed --dist loadfile env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py index e48c73eb17..90ce076c6d 100644 --- a/autotest/test_generate_classes.py +++ b/autotest/test_generate_classes.py @@ -17,10 +17,14 @@ def nonempty(itr: Iterable): def pytest_generate_tests(metafunc): # defaults + owner = "MODFLOW-USGS" + repo = "modflow6" ref = [ - "MODFLOW-USGS/modflow6/develop", - "MODFLOW-USGS/modflow6/master", - "MODFLOW-USGS/modflow6/6.4.1", + f"{owner}/{repo}/develop", + f"{owner}/{repo}/master", + f"{owner}/{repo}/6.4.1", + f"{owner}/{repo}/4458f9f", + f"{owner}/{repo}/4458f9f7a6244182e6acc2430a6996f9ca2df367", ] # refs provided as env vars override the defaults @@ -51,7 +55,18 @@ def pytest_generate_tests(metafunc): @pytest.mark.mf6 @pytest.mark.slow -def test_generate_classes_from_dfn(virtualenv, project_root_path, ref): +@pytest.mark.regression +def test_generate_classes_from_github_refs( + request, virtualenv, project_root_path, ref, worker_id +): + argv = ( + request.config.workerinput["mainargv"] + if hasattr(request.config, "workerinput") + else [] + ) + if worker_id != "master" and "loadfile" not in argv: + pytest.skip("can't run in parallel") + python = virtualenv.python venv = Path(python).parent print( @@ -81,13 +96,16 @@ def test_generate_classes_from_dfn(virtualenv, project_root_path, ref): # generate classes spl = ref.split("/") owner = spl[0] - branch = spl[2] + repo = spl[1] + ref = spl[2] pprint( virtualenv.run( "python -c 'from flopy.mf6.utils import generate_classes; generate_classes(owner=\"" + owner - + '", branch="' - + branch + + '", repo="' + + repo + + f'", ref="' + + ref + "\", backup=False)'" ) ) diff --git 
a/docs/generate_classes.md b/docs/generate_classes.md index 79374d24fa..e01cb0c8cb 100644 --- a/docs/generate_classes.md +++ b/docs/generate_classes.md @@ -40,17 +40,26 @@ LIST OF FILES IN C:\Users\***\flopy\flopy\mf6\modflow ... ``` -The `generate_classes()` function has several optional arguments. +The `generate_classes()` function has several optional parameters. ```python # use the develop branch instead -generate_classes(branch="develop") +generate_classes(ref="develop") # use a fork of modflow6 -generate_classes(owner="your-username", branch="your-branch") +generate_classes(owner="your-username", ref="your-branch") + +# maybe your fork has a different name +generate_classes(owner="your-username", repo="your-modflow6", ref="your-branch") # local copy of the repo -generate_classes(dfnpath="../your/dfn/path")) +generate_classes(dfnpath="../your/dfn/path") ``` -By default, a backup is made of FloPy's package classes before rewriting them. To disable backups, use `backup=False`. \ No newline at end of file +Branch names, commit hashes, or tags may be provided to `ref`. + +By default, a backup is made of FloPy's package classes before rewriting them. To disable backups, use `backup=False`. + +## Testing class generation + +Tests for the `generate_classes()` utility are located in `test_generate_classes.py`. The tests depend on [`pytest-virtualenv`](https://pypi.org/project/pytest-virtualenv/) and will be skipped if run in parallel without the `--dist loadfile` option for `pytest-xdist`. \ No newline at end of file diff --git a/docs/make_release.md b/docs/make_release.md index 1825b3d7a3..4e0499916c 100644 --- a/docs/make_release.md +++ b/docs/make_release.md @@ -96,7 +96,7 @@ As described above, making a release manually involves the following steps: - Run `python scripts/update_version.py -v ` to update the version number stored in `version.txt` and `flopy/version.py`. For an approved release use the `--approve` flag. -- Update MODFLOW 6 dfn files in the repository and MODFLOW 6 package classes by running `python -c 'import flopy; flopy.mf6.utils.generate_classes(branch="master", backup=False)'` +- Update MODFLOW 6 dfn files in the repository and MODFLOW 6 package classes by running `python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="master", backup=False)'` - Run `isort` and `black` on the `flopy` module. This can be achieved by running `python scripts/pull_request_prepare.py` from the project root. The commands `isort .` and `black .` can also be run individually instead. 
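For reference, the refreshed `generate_classes()` signature shown in the next diff accepts branch names, tags, or commit hashes via `ref`. A minimal sketch, assuming network access to GitHub, pinned to one of the commit hashes exercised in `test_generate_classes.py`:

```python
# Minimal sketch: regenerate the MODFLOW 6 package classes from a
# pinned upstream commit (one of the refs exercised in the tests above).
from flopy.mf6.utils import generate_classes

generate_classes(
    owner="MODFLOW-USGS",
    repo="modflow6",
    ref="4458f9f7a6244182e6acc2430a6996f9ca2df367",
    backup=False,  # skip backing up the existing package classes
)
```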
diff --git a/flopy/mf6/utils/generate_classes.py b/flopy/mf6/utils/generate_classes.py index ab078e33bb..dd783a0909 100644 --- a/flopy/mf6/utils/generate_classes.py +++ b/flopy/mf6/utils/generate_classes.py @@ -2,6 +2,7 @@ import shutil import tempfile import time +from warnings import warn from .createpackages import create_packages @@ -62,10 +63,15 @@ def download_dfn(owner, branch, new_dfn_pth): mf6url = mf6url.format(owner, branch) print(f" Downloading MODFLOW 6 repository from {mf6url}") with tempfile.TemporaryDirectory() as tmpdirname: - download_and_unzip(mf6url, tmpdirname, verbose=True) - downloaded_dfn_pth = os.path.join(tmpdirname, f"modflow6-{branch}") + dl_path = download_and_unzip(mf6url, tmpdirname, verbose=True) + dl_contents = list(dl_path.glob("modflow6-*")) + proj_path = next(iter(dl_contents), None) + if not proj_path: + raise ValueError( + f"Could not find modflow6 project dir in {dl_path}: found {dl_contents}" + ) downloaded_dfn_pth = os.path.join( - downloaded_dfn_pth, "doc", "mf6io", "mf6ivar", "dfn" + proj_path, "doc", "mf6io", "mf6ivar", "dfn" ) shutil.copytree(downloaded_dfn_pth, new_dfn_pth) @@ -104,7 +110,12 @@ def delete_mf6_classes(): def generate_classes( - owner="MODFLOW-USGS", branch="master", dfnpath=None, backup=True + owner="MODFLOW-USGS", + repo="modflow6", + branch="master", + ref=None, + dfnpath=None, + backup=True, ): """ Generate the MODFLOW 6 flopy classes using definition files from the @@ -116,9 +127,16 @@ def generate_classes( owner : str Owner of the MODFLOW 6 repository to use to update the definition files and generate the MODFLOW 6 classes. Default is MODFLOW-USGS. + repo : str + Name of the MODFLOW 6 repository to use to update the definition. branch : str Branch name of the MODFLOW 6 repository to use to update the definition files and generate the MODFLOW 6 classes. Default is master. + + .. deprecated:: 3.5.0 + Use ref instead. + ref : str + Branch name, tag, or commit hash to use to update the definition. dfnpath : str Path to a definition file folder that will be used to generate the MODFLOW 6 classes. 
Default is none, which means that the branch @@ -140,11 +158,19 @@ def generate_classes( # user provided dfnpath if dfnpath is None: print( - f" Updating the MODFLOW 6 classes using {owner}/modflow6/{branch}" + f" Updating the MODFLOW 6 classes using {owner}/{repo}/{branch}" ) timestr = time.strftime("%Y%m%d-%H%M%S") new_dfn_pth = os.path.join(flopypth, "mf6", "data", f"dfn_{timestr}") - download_dfn(owner, branch, new_dfn_pth) + + # branch deprecated 3.5.0 + if not ref: + if not branch: + raise ValueError("branch or ref must be provided") + warn("branch is deprecated, use ref instead", DeprecationWarning) + ref = branch + + download_dfn(owner, ref, new_dfn_pth) else: print(f" Updating the MODFLOW 6 classes using {dfnpath}") assert os.path.isdir(dfnpath) From 99f680feb39cb9450e1ce052024bb3da45e264d8 Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Fri, 11 Aug 2023 02:20:36 +1200 Subject: [PATCH 037/101] fix(modelgrid): retain crs data from classic nam files (#1904) --- autotest/test_mfreadnam.py | 70 +++++++++++++++++++++++++++++++- autotest/test_modflow.py | 77 ++++++++++++++++++++++++++++++++---- flopy/discretization/grid.py | 49 ++++++++++++----------- flopy/mbase.py | 6 +++ flopy/modflow/mf.py | 9 ++++- flopy/utils/mfreadnam.py | 16 +++++++- 6 files changed, 190 insertions(+), 37 deletions(-) diff --git a/autotest/test_mfreadnam.py b/autotest/test_mfreadnam.py index 412b2423a4..4be595b724 100644 --- a/autotest/test_mfreadnam.py +++ b/autotest/test_mfreadnam.py @@ -1,7 +1,10 @@ import pytest from autotest.conftest import get_example_data_path -from flopy.utils.mfreadnam import get_entries_from_namefile +from flopy.utils.mfreadnam import ( + attribs_from_namfile_header, + get_entries_from_namefile, +) _example_data_path = get_example_data_path() @@ -39,3 +42,68 @@ def test_get_entries_from_namefile_mf2005(path): entry = entries[0] assert path.parent.name in entry[0] assert entry[1] == package + + +@pytest.mark.parametrize( + "path,expected", + [ + pytest.param( + None, + { + "crs": None, + "rotation": 0.0, + "xll": None, + "xul": None, + "yll": None, + "yul": None, + }, + id="None", + ), + pytest.param( + _example_data_path / "freyberg" / "freyberg.nam", + { + "crs": None, + "rotation": 0.0, + "xll": None, + "xul": None, + "yll": None, + "yul": None, + }, + id="freyberg", + ), + pytest.param( + _example_data_path + / "freyberg_multilayer_transient" + / "freyberg.nam", + { + "crs": "+proj=utm +zone=14 +ellps=WGS84 +datum=WGS84 +units=m +no_defs", + "rotation": 15.0, + "start_datetime": "1/1/2015", + "xll": None, + "xul": 619653.0, + "yll": None, + "yul": 3353277.0, + }, + id="freyberg_multilayer_transient", + ), + pytest.param( + _example_data_path + / "mt3d_test" + / "mfnwt_mt3dusgs" + / "sft_crnkNic" + / "CrnkNic.nam", + { + "crs": "EPSG:26916", + "rotation": 0.0, + "start_datetime": "1-1-1970", + "xll": None, + "xul": 0.0, + "yll": None, + "yul": 15.0, + }, + id="CrnkNic", + ), + ], +) +def test_attribs_from_namfile_header(path, expected): + assert attribs_from_namfile_header(path) == expected diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index af98ea8a95..a1e0c149fa 100644 --- a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -7,6 +7,7 @@ import numpy as np import pytest from autotest.conftest import get_example_data_path +from modflow_devtools.misc import has_pkg from modflow_devtools.markers import excludes_platform, requires_exe from flopy.discretization import StructuredGrid @@ -29,6 +30,8 @@ from flopy.seawat import Seawat from flopy.utils import 
Util2d +_example_data_path = get_example_data_path() + @pytest.fixture def model_reference_path(example_data_path) -> Path: @@ -76,6 +79,64 @@ def test_modflow_load(namfile, example_data_path): assert model.model_ws == str(mpath) +@pytest.mark.parametrize( + "path,expected", + [ + pytest.param( + _example_data_path / "freyberg" / "freyberg.nam", + { + "crs": None, + "epsg": None, + "angrot": 0.0, + "xoffset": 0.0, + "yoffset": 0.0, + }, + id="freyberg", + ), + pytest.param( + _example_data_path + / "freyberg_multilayer_transient" + / "freyberg.nam", + { + "proj4": "+proj=utm +zone=14 +ellps=WGS84 +datum=WGS84 +units=m +no_defs", + "angrot": 15.0, + "xoffset": 622241.1904510253, + "yoffset": 3343617.741737109, + }, + id="freyberg_multilayer_transient", + ), + pytest.param( + _example_data_path + / "mt3d_test" + / "mfnwt_mt3dusgs" + / "sft_crnkNic" + / "CrnkNic.nam", + { + "epsg": 26916, + "angrot": 0.0, + "xoffset": 0.0, + "yoffset": 0.0, + }, + id="CrnkNic", + ), + ], +) +def test_modflow_load_modelgrid(path, expected): + """Check modelgrid metadata from NAM file.""" + model = Modflow.load(path.name, model_ws=path.parent, load_only=[]) + modelgrid = model.modelgrid + for key, expected_value in expected.items(): + if key == "proj4" and has_pkg("pyproj"): + # skip since pyproj will usually restructure proj4 attribute + # otherwise continue test without pyproj, as it should be preserved + continue + modelgrid_value = getattr(modelgrid, key) + if isinstance(modelgrid_value, float): + assert modelgrid_value == pytest.approx(expected_value), key + else: + assert modelgrid_value == expected_value, key + + def test_modflow_load_when_nam_dne(): with pytest.raises(OSError): Modflow.load("nonexistent.nam", check=False) @@ -601,12 +662,12 @@ def test_read_usgs_model_reference(function_tmpdir, model_reference_path): def mf2005_model_namfiles(): - path = get_example_data_path() / "mf2005_test" + path = _example_data_path / "mf2005_test" return [str(p) for p in path.glob("*.nam")] def parameters_model_namfiles(): - path = get_example_data_path() / "parameters" + path = _example_data_path / "parameters" skip = ["twrip.nam", "twrip_upw.nam"] # TODO: why do these fail? 
return [str(p) for p in path.glob("*.nam") if p.name not in skip] @@ -794,15 +855,15 @@ def test_mflist_add_record(): np.testing.assert_array_equal(wel.stress_period_data[1], check1) -__mf2005_test_path = get_example_data_path() / "mf2005_test" -__mf2005_namfiles = [ - Path(__mf2005_test_path) / f - for f in __mf2005_test_path.rglob("*") +_mf2005_test_path = _example_data_path / "mf2005_test" +_mf2005_namfiles = [ + Path(_mf2005_test_path) / f + for f in _mf2005_test_path.rglob("*") if f.suffix == ".nam" ] -@pytest.mark.parametrize("namfile", __mf2005_namfiles) +@pytest.mark.parametrize("namfile", _mf2005_namfiles) def test_checker_on_load(namfile): # load all of the models in the mf2005_test folder # model level checks are performed by default on load() @@ -820,7 +881,7 @@ def test_checker_on_load(namfile): @pytest.mark.parametrize("str_path", [True, False]) def test_manual_check(function_tmpdir, str_path): - namfile_path = __mf2005_namfiles[0] + namfile_path = _mf2005_namfiles[0] summary_path = function_tmpdir / "summary" model = Modflow.load(namfile_path, model_ws=namfile_path.parent) model.change_model_ws(function_tmpdir) diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py index 7780213f57..39d8bcfbe9 100644 --- a/flopy/discretization/grid.py +++ b/flopy/discretization/grid.py @@ -6,6 +6,13 @@ import numpy as np +try: + import pyproj + + HAS_PYPROJ = True +except ImportError: + HAS_PYPROJ = False + from ..utils import geometry from ..utils.crs import get_crs from ..utils.gridutil import get_lni @@ -204,15 +211,15 @@ def __init__( raise TypeError(f"unhandled keywords: {kwargs}") if prjfile is not None: self.prjfile = prjfile - try: + if HAS_PYPROJ: self._crs = get_crs(**get_crs_args) - except ImportError: - # provide some support without pyproj by retaining 'epsg' integer - if getattr(self, "_epsg", None) is None: - epsg = _get_epsg_from_crs_or_proj4(crs, self.proj4) - if epsg is not None: - self.epsg = epsg - + elif crs is not None: + # provide some support without pyproj + if isinstance(crs, str) and self.proj4 is None: + self._proj4 = crs + if self.epsg is None: + if epsg := _get_epsg_from_crs_or_proj4(crs, self.proj4): + self._epsg = epsg self._prjfile = prjfile self._xoff = xoff self._yoff = yoff @@ -304,9 +311,9 @@ def crs(self, crs): if crs is None: self._crs = None return - try: + if HAS_PYPROJ: self._crs = get_crs(crs=crs) - except ImportError: + else: warnings.warn( "cannot set 'crs' property without pyproj; " "try setting 'epsg' or 'proj4' instead", @@ -336,11 +343,8 @@ def epsg(self, epsg): raise ValueError("epsg property must be an int or None") self._epsg = epsg # If crs was previously unset, use EPSG code - if self._crs is None and epsg is not None: - try: - self._crs = get_crs(crs=epsg) - except ImportError: - pass + if HAS_PYPROJ and self._crs is None and epsg is not None: + self._crs = get_crs(crs=epsg) @property def proj4(self): @@ -364,11 +368,8 @@ def proj4(self, proj4): raise ValueError("proj4 property must be a str or None") self._proj4 = proj4 # If crs was previously unset, use lossy PROJ string - if self._crs is None and proj4 is not None: - try: - self._crs = get_crs(crs=proj4) - except ImportError: - pass + if HAS_PYPROJ and self._crs is None and proj4 is not None: + self._crs = get_crs(crs=proj4) @property def prj(self): @@ -402,10 +403,10 @@ def prjfile(self, prjfile): raise ValueError("prjfile property must be str, PathLike or None") self._prjfile = prjfile # If crs was previously unset, use .prj file input - if self._crs is None: + if 
HAS_PYPROJ and self._crs is None: try: self._crs = get_crs(prjfile=prjfile) - except (ImportError, FileNotFoundError): + except FileNotFoundError: pass @property @@ -1012,9 +1013,9 @@ def set_coord_info( self.proj4 = get_crs_args["proj4"] = kwargs.pop("proj4") if kwargs: raise TypeError(f"unhandled keywords: {kwargs}") - try: + if HAS_PYPROJ: new_crs = get_crs(**get_crs_args) - except ImportError: + else: new_crs = None # provide some support without pyproj by retaining 'epsg' integer if getattr(self, "_epsg", None) is None: diff --git a/flopy/mbase.py b/flopy/mbase.py index 846bc6fca9..7e7ca668ae 100644 --- a/flopy/mbase.py +++ b/flopy/mbase.py @@ -423,6 +423,12 @@ def __init__( self._crs = kwargs.pop("crs", None) self._start_datetime = kwargs.pop("start_datetime", "1-1-1970") + if kwargs: + warnings.warn( + f"unhandled keywords: {kwargs}", + category=UserWarning, + ) + # build model discretization objects self._modelgrid = Grid( crs=self._crs, diff --git a/flopy/modflow/mf.py b/flopy/modflow/mf.py index ea1ec9974e..5aacf266d1 100644 --- a/flopy/modflow/mf.py +++ b/flopy/modflow/mf.py @@ -278,9 +278,14 @@ def modelgrid(self): ibound = self.bas6.ibound.array else: ibound = None - + # take the first non-None entry + crs = ( + self._modelgrid.crs + or self._modelgrid.proj4 + or self._modelgrid.epsg + ) common_kwargs = { - "crs": self._modelgrid.crs or self._modelgrid.epsg, + "crs": crs, "xoff": self._modelgrid.xoffset, "yoff": self._modelgrid.yoffset, "angrot": self._modelgrid.angrot, diff --git a/flopy/utils/mfreadnam.py b/flopy/utils/mfreadnam.py index d51854fffe..b96ff5e50a 100644 --- a/flopy/utils/mfreadnam.py +++ b/flopy/utils/mfreadnam.py @@ -205,7 +205,17 @@ def parsenamefile(namfilename, packages, verbose=True): def attribs_from_namfile_header(namefile): - # check for reference info in the nam file header + """Return spatial and temporal reference info from the nam header. + + Parameters + ---------- + namefile : str, PathLike or None + Path to NAM file to read. 
+ + Returns + ------- + dict + """ defaults = { "xll": None, "yll": None, @@ -255,7 +265,6 @@ def attribs_from_namfile_header(namefile): except: print(f" could not parse rotation in {namefile}") elif "proj4_str" in item.lower(): - # deprecated, use "crs" instead try: proj4 = ":".join(item.split(":")[1:]).strip() if proj4.lower() == "none": @@ -277,6 +286,9 @@ def attribs_from_namfile_header(namefile): defaults["start_datetime"] = start_datetime except: print(f" could not parse start in {namefile}") + if "proj4_str" in defaults and defaults["crs"] is None: + # handle deprecated keyword, use "crs" instead + defaults["crs"] = defaults.pop("proj4_str") return defaults From ed3a0cd68ad1acb68fba750f415fa095a19b741a Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Sat, 12 Aug 2023 03:30:44 +1200 Subject: [PATCH 038/101] refactor(expired deprecation): remaining references to SpatialReference (#1914) --- autotest/test_export.py | 10 +++--- autotest/test_modflow.py | 2 +- docs/flopy_method_dependencies.md | 6 ++-- flopy/discretization/grid.py | 8 ----- flopy/export/shapefile_utils.py | 20 +++++++---- flopy/mbase.py | 35 ++++++++++--------- flopy/modflow/mfdis.py | 58 +------------------------------ flopy/modflow/mfsfr2.py | 2 +- flopy/mt3d/mt.py | 6 ---- flopy/pakbase.py | 2 -- flopy/plot/plotutil.py | 2 +- flopy/utils/binaryfile.py | 20 ----------- flopy/utils/reference.py | 5 +-- 13 files changed, 45 insertions(+), 131 deletions(-) diff --git a/autotest/test_export.py b/autotest/test_export.py index 379353be18..51e238f398 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -198,7 +198,7 @@ def test_freyberg_export(function_tmpdir, example_data_path): verbose=False, load_only=["DIS", "BAS6", "NWT", "OC", "RCH", "WEL", "DRN", "UPW"], ) - # test export without instantiating an sr + # test export without instantiating a modelgrid m.modelgrid.crs = None shape = function_tmpdir / f"{name}_drn_sparse.shp" m.drn.stress_period_data.export(shape, sparse=True) @@ -211,7 +211,7 @@ def test_freyberg_export(function_tmpdir, example_data_path): m.modelgrid = StructuredGrid( delc=m.dis.delc.array, delr=m.dis.delr.array, crs=3070 ) - # test export with an sr, regardless of whether or not wkt was found + # test export with a modelgrid, regardless of whether or not wkt was found m.drn.stress_period_data.export(shape, sparse=True) for suffix in [".dbf", ".prj", ".shp", ".shx"]: part = shape.with_suffix(suffix) @@ -221,16 +221,16 @@ def test_freyberg_export(function_tmpdir, example_data_path): m.modelgrid = StructuredGrid( delc=m.dis.delc.array, delr=m.dis.delr.array, crs=3070 ) - # verify that attributes have same sr as parent + # verify that attributes have same modelgrid as parent assert m.drn.stress_period_data.mg.crs == m.modelgrid.crs assert m.drn.stress_period_data.mg.xoffset == m.modelgrid.xoffset assert m.drn.stress_period_data.mg.yoffset == m.modelgrid.yoffset assert m.drn.stress_period_data.mg.angrot == m.modelgrid.angrot - # get wkt text was fetched from spatialreference.org + # get wkt text from pyproj wkt = m.modelgrid.crs.to_wkt() - # if wkt text was fetched from spatialreference.org + # if wkt text was fetched from pyproj if wkt is not None: # test default package export shape = function_tmpdir / f"{name}_dis.shp" diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index a1e0c149fa..d19a255d37 100644 --- a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -560,7 +560,7 @@ def test_namfile_readwrite(function_tmpdir, example_data_path): angrot=30, ) - # test reading 
and writing of SR information to namfile + # test reading and writing of modelgrid information to namfile m.write_input() m2 = Modflow.load("junk.nam", model_ws=ws) diff --git a/docs/flopy_method_dependencies.md b/docs/flopy_method_dependencies.md index 72841add05..aa1ac03687 100644 --- a/docs/flopy_method_dependencies.md +++ b/docs/flopy_method_dependencies.md @@ -7,10 +7,10 @@ Additional dependencies to use optional FloPy helper methods are listed below. | `.export(*.shp)` | **Pyshp** >= 2.0.0 | | `.export(*.nc)` | **netcdf4** >= 1.1, and **python-dateutil** >= 2.4.0 | | `.export(*.tif)` | **rasterio** | -| `.export(*.asc)` in `flopy.utils.reference` `SpatialReference` class | **scipy.ndimage** | -| `.interpolate()` in `flopy.utils.reference` `SpatialReference` class | **scipy.interpolate** | +| `.export_array(*.asc)` in `flopy.export.utils` | **scipy.ndimage** | +| `.resample_to_grid()` in `flopy.utils.rasters` | **scipy.interpolate** | | `.interpolate()` in `flopy.mf6.utils.reference` `StructuredSpatialReference` class | **scipy.interpolate** | -| `._parse_units_from_proj4()` in `flopy.utils.reference` `SpatialReference` class | **pyproj** | +| `.get_authority_crs()` in `flopy.utils.crs` | **pyproj** >= 2.2.0 | | `.generate_classes()` in `flopy.mf6.utils` | [**modflow-devtools**](https://github.com/MODFLOW-USGS/modflow-devtools) | | `GridIntersect()` in `flopy.utils.gridintersect` | **shapely** | | `GridIntersect().plot_polygon()` in `flopy.utils.gridintersect` | **shapely** and **descartes** | diff --git a/flopy/discretization/grid.py b/flopy/discretization/grid.py index 39d8bcfbe9..0fe69caf55 100644 --- a/flopy/discretization/grid.py +++ b/flopy/discretization/grid.py @@ -1193,14 +1193,6 @@ def _yul_to_yll(self, yul, angrot=None): else: return yul - (np.cos(self.angrot_radians) * yext) - def _set_sr_coord_info(self, sr): - self._xoff = sr.xll - self._yoff = sr.yll - self._angrot = sr.rotation - self._epsg = sr.epsg - self._proj4 = sr.proj4_str - self._require_cache_updates() - def _require_cache_updates(self): for cache_data in self._cache_dict.values(): cache_data.out_of_date = True diff --git a/flopy/export/shapefile_utils.py b/flopy/export/shapefile_utils.py index 87184b74fd..ffe8f527e1 100644 --- a/flopy/export/shapefile_utils.py +++ b/flopy/export/shapefile_utils.py @@ -15,6 +15,7 @@ import numpy as np from ..datbase import DataInterface, DataType +from ..discretization.grid import Grid from ..utils import Util3d, flopy_io, import_optional_dependency from ..utils.crs import get_crs @@ -28,7 +29,8 @@ def write_gridlines_shapefile(filename: Union[str, os.PathLike], mg): ---------- filename : str or PathLike path of the shapefile to write - mg : model grid + mg : flopy.discretization.grid.Grid object + flopy model grid Returns ------- @@ -36,6 +38,10 @@ def write_gridlines_shapefile(filename: Union[str, os.PathLike], mg): """ shapefile = import_optional_dependency("shapefile") + if not isinstance(mg, Grid): + raise ValueError( + f"'mg' must be a flopy Grid subclass instance; found '{type(mg)}'" + ) wr = shapefile.Writer(str(filename), shapeType=shapefile.POLYLINE) wr.field("number", "N", 18, 0) grid_lines = mg.grid_lines @@ -68,7 +74,7 @@ def write_grid_shapefile( ---------- path : str or PathLike shapefile file path - mg : flopy.discretization.Grid object + mg : flopy.discretization.grid.Grid object flopy model grid array_dict : dict dictionary of model input arrays @@ -101,7 +107,11 @@ def write_grid_shapefile( w = shapefile.Writer(str(path), shapeType=shapefile.POLYGON) 
w.autoBalance = 1 - if mg.grid_type == "structured": + if not isinstance(mg, Grid): + raise ValueError( + f"'mg' must be a flopy Grid subclass instance; found '{type(mg)}'" + ) + elif mg.grid_type == "structured": verts = [ mg.get_cell_vertices(i, j) for i in range(mg.nrow) @@ -112,7 +122,7 @@ def write_grid_shapefile( elif mg.grid_type == "unstructured": verts = [mg.get_cell_vertices(cellid) for cellid in range(mg.nnodes)] else: - raise Exception(f"Grid type {mg.grid_type} not supported.") + raise NotImplementedError(f"Grid type {mg.grid_type} not supported.") # set up the attribute fields and arrays of attributes if mg.grid_type == "structured": @@ -293,8 +303,6 @@ def model_attributes_to_shapefile( pak = ml.get_package(pname) attrs = dir(pak) if pak is not None: - if "sr" in attrs: - attrs.remove("sr") if "start_datetime" in attrs: attrs.remove("start_datetime") for attr in attrs: diff --git a/flopy/mbase.py b/flopy/mbase.py index 7e7ca668ae..6f1cd660c3 100644 --- a/flopy/mbase.py +++ b/flopy/mbase.py @@ -696,19 +696,29 @@ def __getattr__(self, item): Parameters ---------- item : str - 3 character package name (case insensitive) or "sr" to access - the SpatialReference instance of the ModflowDis object + This can be one of: + + * A short package name (case insensitive), e.g., "dis" or "bas6" + returns package object + * "tr" to access the time discretization object, if set + * "start_datetime" to get str describing model start date/time + * Some packages use "nper" or "modelgrid" for corner cases Returns ------- - sr : SpatialReference instance - pp : Package object - Package object of type :class:`flopy.pakbase.Package` + object, str, int or None + Package object of type :class:`flopy.pakbase.Package`, + :class:`flopy.utils.reference.TemporalReference`, str, int or None. + + Raises + ------ + AttributeError + When package or object name cannot be resolved. 
Note ---- - if self.dis is not None, then the spatial reference instance is updated + if self.dis is not None, then the modelgrid instance is updated using self.dis.delr, self.dis.delc, and self.dis.lenuni before being returned """ @@ -722,6 +732,7 @@ def __getattr__(self, item): return None if item == "nper": + # most subclasses have a nper property, but ModflowAg needs this if self.dis is not None: return self.dis.nper else: @@ -745,6 +756,7 @@ def __getattr__(self, item): if pckg is not None or item in self.mfnam_packages: return pckg if item == "modelgrid": + # most subclasses have a modelgrid property, but not MfUsg return raise AttributeError(item) @@ -1388,17 +1400,6 @@ def __setattr__(self, key, value): self._set_name(value) elif key == "model_ws": self.change_model_ws(value) - elif key == "sr" and value.__class__.__name__ == "SpatialReference": - warnings.warn( - "SpatialReference has been deprecated.", - category=DeprecationWarning, - ) - if self.dis is not None: - self.dis.sr = value - else: - raise Exception( - "cannot set SpatialReference - ModflowDis not found" - ) elif key == "tr": assert isinstance( value, discretization.reference.TemporalReference diff --git a/flopy/modflow/mfdis.py b/flopy/modflow/mfdis.py index 4797d55cac..a146eeb8d0 100644 --- a/flopy/modflow/mfdis.py +++ b/flopy/modflow/mfdis.py @@ -773,62 +773,11 @@ def load(cls, f, model, ext_unit_dict=None, check=True): f = open(filename, "r") # dataset 0 -- header - header = "" while True: line = f.readline() if line[0] != "#": break - header += line.strip() - - header = header.replace("#", "") - xul, yul = None, None - rotation = None - proj4_str = None - start_datetime = "1/1/1970" - dep = False - for item in header.split(","): - if "xul" in item.lower(): - try: - xul = float(item.split(":")[1]) - except: - if model.verbose: - print(f" could not parse xul in {filename}") - dep = True - elif "yul" in item.lower(): - try: - yul = float(item.split(":")[1]) - except: - if model.verbose: - print(f" could not parse yul in {filename}") - dep = True - elif "rotation" in item.lower(): - try: - rotation = float(item.split(":")[1]) - except: - if model.verbose: - print(f" could not parse rotation in {filename}") - dep = True - elif "proj4_str" in item.lower(): - try: - proj4_str = ":".join(item.split(":")[1:]).strip() - except: - if model.verbose: - print(f" could not parse proj4_str in {filename}") - dep = True - elif "start" in item.lower(): - try: - start_datetime = item.split(":")[1].strip() - except: - if model.verbose: - print(f" could not parse start in {filename}") - dep = True - if dep: - warnings.warn( - "SpatialReference information found in DIS header," - "this information is being ignored. " - "SpatialReference info is now stored in the namfile" - "header" - ) + # dataset 1 nlay, nrow, ncol, nper, itmuni, lenuni = line.strip().split()[0:6] nlay = int(nlay) @@ -945,11 +894,6 @@ def load(cls, f, model, ext_unit_dict=None, check=True): steady=steady, itmuni=itmuni, lenuni=lenuni, - xul=xul, - yul=yul, - rotation=rotation, - crs=proj4_str, - start_datetime=start_datetime, unitnumber=unitnumber, filenames=filenames, ) diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index c2a1f43400..1efe601125 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -2520,7 +2520,7 @@ def routing(self): ) else: txt += ( - "No DIS package or SpatialReference object; cannot " + "No DIS package or modelgrid object; cannot " "check reach proximities." 
) self._txt_footer(headertxt, txt, "") diff --git a/flopy/mt3d/mt.py b/flopy/mt3d/mt.py index c3d82cdda4..7f0443d73b 100644 --- a/flopy/mt3d/mt.py +++ b/flopy/mt3d/mt.py @@ -332,12 +332,6 @@ def solver_tols(self): return self.gcg.cclose, -999 return None - @property - def sr(self): - if self.mf is not None: - return self.mf.sr - return None - @property def nlay(self): if self.btn: diff --git a/flopy/pakbase.py b/flopy/pakbase.py index 432381820c..21a8a8ce16 100644 --- a/flopy/pakbase.py +++ b/flopy/pakbase.py @@ -655,8 +655,6 @@ def data_list(self): # return [data_object, data_object, ...] dl = [] attrs = dir(self) - if "sr" in attrs: - attrs.remove("sr") if "start_datetime" in attrs: attrs.remove("start_datetime") for attr in attrs: diff --git a/flopy/plot/plotutil.py b/flopy/plot/plotutil.py index 4076e6685f..52001cbd87 100644 --- a/flopy/plot/plotutil.py +++ b/flopy/plot/plotutil.py @@ -1060,7 +1060,7 @@ def _plot_array_helper( ---------- plotarray : np.array object model: fp.modflow.Modflow object - optional if spatial reference is provided + optional if modelgrid is provided modelgrid: fp.discretization.Grid object object that defines the spatial orientation of a modflow grid within flopy. Optional if model object is provided diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index 07445cbcbe..9bd90b90d0 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -1026,26 +1026,6 @@ def __init__( self.modelgrid = self.dis.parent.modelgrid if "tdis" in kwargs.keys(): self.tdis = kwargs.pop("tdis") - if "sr" in kwargs.keys(): - from ..discretization import StructuredGrid, UnstructuredGrid - - sr = kwargs.pop("sr") - if sr.__class__.__name__ == "SpatialReferenceUnstructured": - self.modelgrid = UnstructuredGrid( - vertices=sr.verts, - iverts=sr.iverts, - xcenters=sr.xc, - ycenters=sr.yc, - ncpl=sr.ncpl, - ) - elif sr.__class__.__name__ == "SpatialReference": - self.modelgrid = StructuredGrid( - delc=sr.delc, - delr=sr.delr, - xoff=sr.xll, - yoff=sr.yll, - angrot=sr.rotation, - ) if "modelgrid" in kwargs.keys(): self.modelgrid = kwargs.pop("modelgrid") if len(kwargs.keys()) > 0: diff --git a/flopy/utils/reference.py b/flopy/utils/reference.py index 0fc85c7e95..ea962b5913 100644 --- a/flopy/utils/reference.py +++ b/flopy/utils/reference.py @@ -1,9 +1,6 @@ -""" -Module spatial referencing for flopy model objects +"""Temporal referencing for flopy model objects.""" -""" __all__ = ["TemporalReference"] -# all other classes and methods in this module are deprecated class TemporalReference: From 0418f857d3825bfe75dd8e843e8c72d28414e1fe Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Tue, 15 Aug 2023 23:16:28 -0400 Subject: [PATCH 039/101] ci: test without optional dependencies (#1918) * add nightly matrix to test with some and no optional dependencies * randomly select 3 optional dependencies to test without per day * check crs-related deprecation warnings only if pyproj available * fix markers in test_grid.py and test_export.py * use strict=True with has_pkg() --- .github/workflows/commit.yml | 35 ++++++++------- .github/workflows/optional.yml | 75 +++++++++++++++++++++++++++++++++ autotest/test_export.py | 12 +++--- autotest/test_grid.py | 26 +++++++++--- autotest/test_gridgen.py | 1 + autotest/test_gridintersect.py | 4 +- autotest/test_mf6.py | 3 +- autotest/test_model_splitter.py | 1 + 8 files changed, 127 insertions(+), 30 deletions(-) create mode 100644 .github/workflows/optional.yml diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml 
index daf46852a5..770eb2460b 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -106,6 +106,8 @@ jobs: run: shell: bash timeout-minutes: 10 + env: + PYTHON_VERSION: 3.8 steps: - name: Checkout repo @@ -114,7 +116,7 @@ jobs: - name: Setup Python uses: actions/setup-python@v4 with: - python-version: 3.8 + python-version: ${{ env.PYTHON_VERSION }} cache: 'pip' cache-dependency-path: pyproject.toml @@ -122,15 +124,14 @@ jobs: run: | pip install --upgrade pip pip install . - pip install ".[test, optional]" - + pip install ".[test,optional]" + - name: Install Modflow executables uses: modflowpy/install-modflow-action@v1 - - - name: Run smoke tests - working-directory: ./autotest - run: | - pytest -v -n=auto --smoke --durations=0 --keep-failed=.failed + + - name: Smoke test + working-directory: autotest + run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -138,9 +139,14 @@ jobs: uses: actions/upload-artifact@v3 if: failure() with: - name: failed-smoke-${{ matrix.os }}-${{ matrix.python-version }} - path: | - ./autotest/.failed/** + name: failed-smoke-${{ runner.os }}-${{ env.PYTHON_VERSION }} + path: ./autotest/.failed/** + + - name: Upload coverage + if: github.repository_owner == 'modflowpy' && (github.event_name == 'push' || github.event_name == 'pull_request') + uses: codecov/codecov-action@v3 + with: + files: ./autotest/coverage.xml test: name: Test @@ -220,7 +226,7 @@ jobs: if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -230,7 +236,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -244,8 +250,7 @@ jobs: ./autotest/.failed/** - name: Upload coverage - if: - github.repository_owner == 'modflowpy' && (github.event_name == 'push' || github.event_name == 'pull_request') + if: github.repository_owner == 'modflowpy' && (github.event_name == 'push' || github.event_name == 'pull_request') uses: codecov/codecov-action@v3 with: files: ./autotest/coverage.xml diff --git a/.github/workflows/optional.yml b/.github/workflows/optional.yml new file mode 100644 index 0000000000..094c92e6c4 --- /dev/null +++ b/.github/workflows/optional.yml @@ -0,0 +1,75 @@ +name: FloPy optional dependency testing +on: + schedule: + - cron: '0 8 * * *' # run at 8 AM UTC (12 am PST) +jobs: + test: + name: Test + runs-on: ubuntu-latest + defaults: + run: + shell: bash + timeout-minutes: 10 + env: + PYTHON_VERSION: 3.8 + strategy: + fail-fast: false + matrix: + optdeps: + # - "all optional dependencies" + - "no optional dependencies" + - "some optional dependencies" + + steps: + - name: Checkout repo + uses: actions/checkout@v3 + + - name: Setup Python + uses: actions/setup-python@v4 + with: + python-version: ${{ env.PYTHON_VERSION }} + cache: 'pip' + cache-dependency-path: 
pyproject.toml + + - name: Install Python dependencies + run: | + pip install --upgrade pip + pip install . + pip install ".[test]" + + # Install optional dependencies according to matrix.optdeps. + # If matrix.optdeps is "some" remove 3 optional dependencies + # selected randomly using the current date as the seed. + if [[ ! "${{ matrix.optdeps }}" == *"no"* ]]; then + pip install ".[optional]" + fi + if [[ "${{ matrix.optdeps }}" == *"some"* ]]; then + deps=$(sed '/optional =/,/]/!d' pyproject.toml | sed -e '1d;$d' -e 's/\"//g' -e 's/,//g' | tr -d ' ' | cut -f 1 -d ';') + rmvd=$(echo $deps | tr ' ' '\n' | shuf --random-source <(yes date +%d.%m.%y) | head -n 3) + echo "Removing optional dependencies: $rmvd" >> removed_dependencies.txt + cat removed_dependencies.txt + pip uninstall --yes $rmvd + fi + + - name: Upload removed dependencies log + uses: actions/upload-artifact@v3 + with: + name: smoke-test-removed-dependencies + path: ./removed_dependencies.txt + + - name: Install Modflow executables + uses: modflowpy/install-modflow-action@v1 + + - name: Smoke test (${{ matrix.optdeps }}) + working-directory: autotest + run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + + - name: Upload failed test outputs + uses: actions/upload-artifact@v3 + if: failure() + with: + name: failed-smoke-${{ runner.os }}-${{ env.PYTHON_VERSION }} + path: ./autotest/.failed/** + \ No newline at end of file diff --git a/autotest/test_export.py b/autotest/test_export.py index 51e238f398..afe9bdf39a 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -52,6 +52,11 @@ from flopy.utils.geometry import Polygon +HAS_PYPROJ = has_pkg("pyproj", strict=True) +if HAS_PYPROJ: + import pyproj + + def namfiles() -> List[Path]: mf2005_path = get_example_data_path() / "mf2005_test" return list(mf2005_path.rglob("*.nam")) @@ -330,10 +335,7 @@ def test_write_gridlines_shapefile(function_tmpdir): for suffix in [".dbf", ".shp", ".shx"]: assert outshp.with_suffix(suffix).exists() - if has_pkg("pyproj"): - assert outshp.with_suffix(".prj").exists() - else: - assert not outshp.with_suffix(".prj").exists() + assert outshp.with_suffix(".prj").exists() == HAS_PYPROJ with shapefile.Reader(str(outshp)) as sf: assert sf.shapeType == shapefile.POLYLINE @@ -1378,7 +1380,7 @@ def test_vtk_unstructured(function_tmpdir, example_data_path): assert np.allclose(np.ravel(top), top2), "Field data not properly written" -@requires_pkg("pyvista") +@requires_pkg("vtk", "pyvista") def test_vtk_to_pyvista(function_tmpdir, example_data_path): from autotest.test_mp7_cases import Mp7Cases diff --git a/autotest/test_grid.py b/autotest/test_grid.py index c3dafe5557..214018777e 100644 --- a/autotest/test_grid.py +++ b/autotest/test_grid.py @@ -1,6 +1,7 @@ import os import re import warnings +from contextlib import nullcontext from warnings import warn import matplotlib @@ -22,7 +23,7 @@ from flopy.utils.triangle import Triangle from flopy.utils.voronoi import VoronoiGrid -HAS_PYPROJ = has_pkg("pyproj") +HAS_PYPROJ = has_pkg("pyproj", strict=True) if HAS_PYPROJ: import pyproj @@ -594,9 +595,14 @@ def do_checks(g): do_checks(UnstructuredGrid(**d, crs=crs)) do_checks(VertexGrid(vertices=d["vertices"], crs=crs)) + # only check deprecations if pyproj is available + pyproj_avail_context = ( + pytest.deprecated_call() if HAS_PYPROJ else nullcontext() + ) + # test deprecated 'epsg' parameter if isinstance(crs, int): - with pytest.deprecated_call(): + with 
pyproj_avail_context: do_checks(StructuredGrid(delr=delr, delc=delc, epsg=crs)) if HAS_PYPROJ and crs == 26916: @@ -612,14 +618,14 @@ def do_checks(g): do_checks(StructuredGrid(delr=delr, delc=delc, prjfile=prjfile)) # test deprecated 'prj' parameter - with pytest.deprecated_call(): + with pyproj_avail_context: do_checks(StructuredGrid(delr=delr, delc=delc, prj=prjfile)) # test deprecated 'proj4' parameter with warnings.catch_warnings(): warnings.simplefilter("ignore") # pyproj warning about conversion proj4 = crs_obj.to_proj4() - with pytest.deprecated_call(): + with pyproj_avail_context: do_checks(StructuredGrid(delr=delr, delc=delc, proj4=proj4)) @@ -675,9 +681,14 @@ def do_checks(g, *, exp_srs=expected_srs, exp_epsg=expected_epsg): sg.set_coord_info(crs=26915, merge_coord_info=False) do_checks(sg, exp_srs="EPSG:26915", exp_epsg=26915) + # only check deprecations if pyproj is available + pyproj_avail_context = ( + pytest.deprecated_call() if HAS_PYPROJ else nullcontext() + ) + # test deprecated 'epsg' parameter if isinstance(crs, int): - with pytest.deprecated_call(): + with pyproj_avail_context: sg.set_coord_info(epsg=crs) do_checks(sg) @@ -827,8 +838,9 @@ def test_grid_crs_exceptions(): # test non-existing file not_a_file = "not-a-file" - with pytest.raises(FileNotFoundError): - StructuredGrid(delr=delr, delc=delc, prjfile=not_a_file) + if HAS_PYPROJ: + with pytest.raises(FileNotFoundError): + StructuredGrid(delr=delr, delc=delc, prjfile=not_a_file) # note "sg.prjfile = not_a_file" intentionally does not raise anything # test unhandled keyword diff --git a/autotest/test_gridgen.py b/autotest/test_gridgen.py index 2453e473f1..0703a274eb 100644 --- a/autotest/test_gridgen.py +++ b/autotest/test_gridgen.py @@ -14,6 +14,7 @@ from flopy.utils.gridgen import Gridgen +@requires_exe("gridgen") def test_ctor_accepts_path_or_string(function_tmpdir): grid = GridCases().structured_small() diff --git a/autotest/test_gridintersect.py b/autotest/test_gridintersect.py index 82c1f93856..498320e53b 100644 --- a/autotest/test_gridintersect.py +++ b/autotest/test_gridintersect.py @@ -14,7 +14,7 @@ from flopy.utils.gridintersect import GridIntersect from flopy.utils.triangle import Triangle -if has_pkg("shapely"): +if has_pkg("shapely", strict=True): from shapely.geometry import ( LineString, MultiLineString, @@ -1221,7 +1221,7 @@ def test_polygon_offset_rot_structured_grid_shapely(rtree): # %% test rasters -@requires_pkg("rasterstats", "scipy") +@requires_pkg("rasterstats", "scipy", "shapely") def test_rasters(example_data_path): ws = example_data_path / "options" raster_name = "dem.img" diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index 3a35108f2c..f9b475a4a7 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -5,7 +5,7 @@ import numpy as np import pytest -from modflow_devtools.markers import requires_exe +from modflow_devtools.markers import requires_exe, requires_pkg from modflow_devtools.misc import set_dir import flopy @@ -637,6 +637,7 @@ def test_binary_write(function_tmpdir, layered): @requires_exe("mf6") +@requires_pkg("shapely", "scipy") @pytest.mark.parametrize("layered", [True, False]) def test_vor_binary_write(function_tmpdir, layered): # build voronoi grid diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index 08cbb42c5d..8e8dfbe24b 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -200,6 +200,7 @@ def test_metis_splitting_with_lak_sfr(function_tmpdir): @requires_exe("mf6") +@requires_pkg("pymetis") def 
test_save_load_node_mapping(function_tmpdir): sim_path = get_example_data_path() / "mf6-freyberg" new_sim_path = function_tmpdir / "mf6-freyberg/split_model" From 3cef778bdf979ac249d84d106ae7db01d5d8e391 Mon Sep 17 00:00:00 2001 From: spaulins-usgs Date: Wed, 16 Aug 2023 07:28:34 -0700 Subject: [PATCH 040/101] fix(keyword data): optional keywords (#1920) Flopy allows some keywords to be excluded from recarrays, like "FILEIN". This fix makes sure that optional keywords, like "MIXED" are not excluded from recarrays. Since they are optional their presence adds information to the recarray and they therefore can not be excluded. Co-authored-by: scottrp <45947939+scottrp@users.noreply.github.com> --- autotest/test_mf6.py | 14 ++++++++++++++ flopy/mf6/data/mfdatastorage.py | 1 + 2 files changed, 15 insertions(+) diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index f9b475a4a7..7bead499c9 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -2187,6 +2187,20 @@ def test_multi_model(function_tmpdir): assert rec_array[0][3] == model_names[1] assert rec_array[1][1] == "transport.ims" assert rec_array[1][2] == model_names[2] + # test ssm fileinput + gwt2 = sim2.get_model("gwt_model_1") + ssm2 = gwt2.get_package("ssm") + fileinput = [ + ("RCH-1", f"gwt_model_1.rch1.spc"), + ("RCH-2", f"gwt_model_1.rch2.spc"), + ("RCH-3", f"gwt_model_1.rch3.spc", "MIXED"), + ("RCH-4", f"gwt_model_1.rch4.spc"), + ] + ssm2.fileinput = fileinput + fi_out = ssm2.fileinput.get_data() + assert fi_out[2][1] == "gwt_model_1.rch3.spc" + assert fi_out[1][2] is None + assert fi_out[2][2] == "MIXED" # create a new gwt model sourcerecarray = [("WEL-1", "AUX", "CONCENTRATION")] diff --git a/flopy/mf6/data/mfdatastorage.py b/flopy/mf6/data/mfdatastorage.py index 491c2e20cf..c74fd31d62 100644 --- a/flopy/mf6/data/mfdatastorage.py +++ b/flopy/mf6/data/mfdatastorage.py @@ -2857,6 +2857,7 @@ def build_type_list( if ( data_item.type != DatumType.keyword or data_set.block_variable + or data_item.optional ): initial_keyword = False shape_rule = None From e02030876c20d6bf588134bc0ab0cc1c75c0c46e Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Thu, 17 Aug 2023 03:21:02 +1200 Subject: [PATCH 041/101] feat(generate_classes): create a command-line interface (#1912) * add CLI using -m switch (not project.scripts like get-modflow) * fix default param values to branch=None, ref="master" * update generate_classes.md and make_release.md docs * update usages in CI workflows --- .github/workflows/commit.yml | 4 +- .github/workflows/examples.yml | 4 +- .github/workflows/regression.yml | 4 +- .github/workflows/release.yml | 2 +- autotest/test_generate_classes.py | 9 +-- docs/generate_classes.md | 62 ++++++++++++++----- docs/make_release.md | 2 +- flopy/mf6/utils/generate_classes.py | 92 +++++++++++++++++++++++------ 8 files changed, 130 insertions(+), 49 deletions(-) diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 770eb2460b..79decdc901 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -215,12 +215,12 @@ jobs: - name: Update FloPy packages if: runner.os != 'Windows' - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' + run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - name: Update FloPy packages if: runner.os == 'Windows' shell: bash -l {0} - run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)' + run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup 
      - name: Run tests
        if: runner.os != 'Windows'
diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml
index 739c408e37..62df8cf843 100644
--- a/.github/workflows/examples.yml
+++ b/.github/workflows/examples.yml
@@ -86,12 +86,12 @@ jobs:

       - name: Update FloPy packages
         if: runner.os != 'Windows'
-        run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)'
+        run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup

       - name: Update FloPy packages
         if: runner.os == 'Windows'
         shell: bash -l {0}
-        run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)'
+        run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup

       - name: Run example tests
         if: runner.os != 'Windows'
diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml
index daf9c8729a..706966521a 100644
--- a/.github/workflows/regression.yml
+++ b/.github/workflows/regression.yml
@@ -70,12 +70,12 @@ jobs:

       - name: Update FloPy packages
         if: runner.os != 'Windows'
-        run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)'
+        run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup

       - name: Update FloPy packages
         if: runner.os == 'Windows'
         shell: bash -l {0}
-        run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="develop", backup=False)'
+        run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup

       - name: Run regression tests
         if: runner.os != 'Windows'
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 9fa496b5b3..dbfddeeea0 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -64,7 +64,7 @@ jobs:
           echo "version=${ver#"v"}" >> $GITHUB_OUTPUT

       - name: Update FloPy packages
-        run: python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="master", backup=False)'
+        run: python -m flopy.mf6.utils.generate_classes --ref master --no-backup

       - name: Lint Python files
         run: python scripts/pull_request_prepare.py
diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py
index 90ce076c6d..fdd6b8ae80 100644
--- a/autotest/test_generate_classes.py
+++ b/autotest/test_generate_classes.py
@@ -100,13 +100,8 @@ def test_generate_classes_from_github_refs(
         ref = spl[2]
         pprint(
             virtualenv.run(
-                "python -c 'from flopy.mf6.utils import generate_classes; generate_classes(owner=\""
-                + owner
-                + '", repo="'
-                + repo
-                + f'", ref="'
-                + ref
-                + "\", backup=False)'"
+                "python -m flopy.mf6.utils.generate_classes "
+                f"--owner {owner} --repo {repo} --ref {ref} --no-backup"
             )
         )

diff --git a/docs/generate_classes.md b/docs/generate_classes.md
index e01cb0c8cb..70ffa40851 100644
--- a/docs/generate_classes.md
+++ b/docs/generate_classes.md
@@ -4,9 +4,8 @@ MODFLOW 6 input continues to evolve as new models, packages, and options are dev

 The FloPy classes for MODFLOW 6 are largely generated by a utility which converts DFN files in a modflow6 repository on GitHub or on the local machine into Python source files in your local FloPy install. For instance (output much abbreviated):

-```python
->>> from flopy.mf6.utils import generate_classes
->>> generate_classes()
+```bash
+$ python -m flopy.mf6.utils.generate_classes


@@ -39,27 +38,58 @@ LIST OF FILES IN C:\Users\***\flopy\flopy\mf6\modflow
  2 - mfgwf.py
  ...
  ```
-
-The `generate_classes()` function has several optional parameters.
- +Similar functionality is available within Python, e.g.: ```python -# use the develop branch instead -generate_classes(ref="develop") +>>> from flopy.mf6.utils import generate_classes +>>> generate_classes() +``` -# use a fork of modflow6 -generate_classes(owner="your-username", ref="your-branch") +The `generate_classes()` function has several optional parameters. -# maybe your fork has a different name -generate_classes(owner="your-username", repo="your-modflow6", ref="your-branch") +```bash +$ python -m flopy.mf6.utils.generate_classes -h +usage: generate_classes.py [-h] [--owner OWNER] [--repo REPO] [--ref REF] + [--dfnpath DFNPATH] [--no-backup] + +Generate the MODFLOW 6 flopy classes using definition files from the MODFLOW 6 +GitHub repository or a set of definition files in a folder provided by the +user. + +options: + -h, --help show this help message and exit + --owner OWNER GitHub repository owner; default is 'MODFLOW-USGS'. + --repo REPO Name of GitHub repository; default is 'modflow6'. + --ref REF Branch name, tag, or commit hash to use to update the + definition; default is 'master'. + --dfnpath DFNPATH Path to a definition file folder that will be used to + generate the MODFLOW 6 classes. + --no-backup Set to disable backup. Default behavior is to keep a + backup of the definition files in dfn_backup with a date + and timestamp from when the definition files were + replaced. +``` -# local copy of the repo -generate_classes(dfnpath="../your/dfn/path") +For example, use the develop branch instead: +```bash +$ python -m flopy.mf6.utils.generate_classes --ref develop +``` +use a fork of modflow6: +```bash +$ python -m flopy.mf6.utils.generate_classes --owner your-username --ref your-branch +``` +maybe your fork has a different name: +```bash +$ python -m flopy.mf6.utils.generate_classes --owner your-username --repo your-modflow6 --ref your-branch +``` +local copy of the repo: +```bash +$ python -m flopy.mf6.utils.generate_classes --dfnpath ../your/dfn/path ``` Branch names, commit hashes, or tags may be provided to `ref`. -By default, a backup is made of FloPy's package classes before rewriting them. To disable backups, use `backup=False`. +By default, a backup is made of FloPy's package classes before rewriting them. To disable backups, use `--no-backup` from command-line, or `backup=False` with the Python function. ## Testing class generation -Tests for the `generate_classes()` utility are located in `test_generate_classes.py`. The tests depend on [`pytest-virtualenv`](https://pypi.org/project/pytest-virtualenv/) and will be skipped if run in parallel without the `--dist loadfile` option for `pytest-xdist`. \ No newline at end of file +Tests for the `generate_classes()` utility are located in `test_generate_classes.py`. The tests depend on [`pytest-virtualenv`](https://pypi.org/project/pytest-virtualenv/) and will be skipped if run in parallel without the `--dist loadfile` option for `pytest-xdist`. diff --git a/docs/make_release.md b/docs/make_release.md index 4e0499916c..0da0f17038 100644 --- a/docs/make_release.md +++ b/docs/make_release.md @@ -96,7 +96,7 @@ As described above, making a release manually involves the following steps: - Run `python scripts/update_version.py -v ` to update the version number stored in `version.txt` and `flopy/version.py`. For an approved release use the `--approve` flag. 
-- Update MODFLOW 6 dfn files in the repository and MODFLOW 6 package classes by running `python -c 'import flopy; flopy.mf6.utils.generate_classes(ref="master", backup=False)'` +- Update MODFLOW 6 dfn files in the repository and MODFLOW 6 package classes by running `python -m flopy.mf6.utils.generate_classes --ref master --no-backup` - Run `isort` and `black` on the `flopy` module. This can be achieved by running `python scripts/pull_request_prepare.py` from the project root. The commands `isort .` and `black .` can also be run individually instead. diff --git a/flopy/mf6/utils/generate_classes.py b/flopy/mf6/utils/generate_classes.py index dd783a0909..602ed8fbb5 100644 --- a/flopy/mf6/utils/generate_classes.py +++ b/flopy/mf6/utils/generate_classes.py @@ -11,6 +11,9 @@ flopypth = os.path.abspath(flopypth) protected_dfns = ["flopy.dfn"] +default_owner = "MODFLOW-USGS" +default_repo = "modflow6" + def delete_files(files, pth, allow_failure=False, exclude=None): if exclude is None: @@ -51,13 +54,12 @@ def list_files(pth, exts=["py"]): def download_dfn(owner, branch, new_dfn_pth): try: from modflow_devtools.download import download_and_unzip - except: - msg = ( - "Error. The modflow-devtools package must be installed in order to " - "generate the MODFLOW 6 classes. modflow-devtools can be installed using " - "pip install modflow-devtools. Stopping." + except ImportError: + raise ImportError( + "The modflow-devtools package must be installed in order to " + "generate the MODFLOW 6 classes. This can be with:\n" + " pip install modflow-devtools" ) - print(msg) mf6url = "https://github.com/{}/modflow6/archive/{}.zip" mf6url = mf6url.format(owner, branch) @@ -110,10 +112,10 @@ def delete_mf6_classes(): def generate_classes( - owner="MODFLOW-USGS", - repo="modflow6", - branch="master", - ref=None, + owner=default_owner, + repo=default_repo, + branch=None, + ref="master", dfnpath=None, backup=True, ): @@ -124,27 +126,27 @@ def generate_classes( Parameters ---------- - owner : str + owner : str, default "MODFLOW-USGS" Owner of the MODFLOW 6 repository to use to update the definition - files and generate the MODFLOW 6 classes. Default is MODFLOW-USGS. - repo : str + files and generate the MODFLOW 6 classes. + repo : str, default "modflow6" Name of the MODFLOW 6 repository to use to update the definition. - branch : str + branch : str, optional Branch name of the MODFLOW 6 repository to use to update the - definition files and generate the MODFLOW 6 classes. Default is master. + definition files and generate the MODFLOW 6 classes. .. deprecated:: 3.5.0 Use ref instead. - ref : str + ref : str, default "master" Branch name, tag, or commit hash to use to update the definition. dfnpath : str Path to a definition file folder that will be used to generate the MODFLOW 6 classes. Default is none, which means that the branch will be used instead. dfnpath will take precedence over branch if dfnpath is specified. - backup : bool + backup : bool, default True Keep a backup of the definition files in dfn_backup with a date and - time stamp from when the definition files were replaced. + timestamp from when the definition files were replaced. 
""" @@ -191,3 +193,57 @@ def generate_classes( print(" Create mf6 classes using the downloaded definition files.") create_packages() list_files(os.path.join(flopypth, "mf6", "modflow")) + + +def cli_main(): + """Command-line interface for generate_classes().""" + import argparse + import sys + + parser = argparse.ArgumentParser( + description=generate_classes.__doc__.split("\n\n")[0], + ) + + parser.add_argument( + "--owner", + type=str, + default=default_owner, + help=f"GitHub repository owner; default is '{default_owner}'.", + ) + parser.add_argument( + "--repo", + default=default_repo, + help=f"Name of GitHub repository; default is '{default_repo}'.", + ) + parser.add_argument( + "--ref", + default="master", + help="Branch name, tag, or commit hash to use to update the " + "definition; default is 'master'.", + ) + parser.add_argument( + "--dfnpath", + help="Path to a definition file folder that will be used to generate " + "the MODFLOW 6 classes.", + ) + parser.add_argument( + "--no-backup", + action="store_true", + help="Set to disable backup. " + "Default behavior is to keep a backup of the definition files in " + "dfn_backup with a date and timestamp from when the definition " + "files were replaced.", + ) + + args = vars(parser.parse_args()) + # Handle flipped logic + args["backup"] = not args.pop("no_backup") + try: + generate_classes(**args) + except (EOFError, KeyboardInterrupt): + sys.exit(f" cancelling '{sys.argv[0]}'") + + +if __name__ == "__main__": + """Run command-line with: python -m flopy.mf6.utils.generate_classes""" + cli_main() From f2064d8ce5d8bca6340c86b4947070031104104a Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Tue, 22 Aug 2023 02:07:51 +1200 Subject: [PATCH 042/101] fix(GridIntersect): combine list of geometries using unary_union (#1923) --- flopy/utils/gridintersect.py | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/flopy/utils/gridintersect.py b/flopy/utils/gridintersect.py index 73c44dc989..465ca23229 100644 --- a/flopy/utils/gridintersect.py +++ b/flopy/utils/gridintersect.py @@ -18,6 +18,10 @@ if Version(shapely.__version__) < Version("1.8"): warnings.warn("GridIntersect requires shapely>=1.8.") shapely = None + if SHAPELY_GE_20: + from shapely import unary_union + else: + from shapely.ops import unary_union else: SHAPELY_GE_20 = False @@ -1487,10 +1491,7 @@ def _intersect_linestring_structured( tempverts.append(vertices[i]) ishp = ixshapes[i] if isinstance(ishp, list): - if len(ishp) > 1: - ishp = shapely_geo.MultiLineString(ishp) - else: - ishp = ishp[0] + ishp = unary_union(ishp) tempshapes.append(ishp) nodelist = tempnodes lengths = templengths From 3a1ae0bba3171561f0b4b78a2bb892fdd0c55e90 Mon Sep 17 00:00:00 2001 From: Joshua Larsen Date: Mon, 21 Aug 2023 09:30:51 -0700 Subject: [PATCH 043/101] updates(Mf6Splitter): control record and additional splitting checks (#1919) * update(_set_neighbors): check for closed iverts and remove closing ivert * updates(Mf6Splitter): control record and additional splitting checks * added checks for hfb and inactive models * updated remap_array and remap_mflist to maintain constants, external ascii files, and binary external files * linting * fix idomain check * fix remapping for idomain * fix idomain issue --- autotest/test_model_splitter.py | 128 ++++++++++++++++++++ flopy/mf6/utils/model_splitter.py | 188 +++++++++++++++++++++++++++++- 2 files changed, 310 insertions(+), 6 deletions(-) diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index 
8e8dfbe24b..921c29dc4a 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -1,4 +1,5 @@ import numpy as np +import flopy import pytest from autotest.conftest import get_example_data_path from modflow_devtools.markers import requires_exe, requires_pkg @@ -246,3 +247,130 @@ def test_save_load_node_mapping(function_tmpdir): new_heads = mfsplit2.reconstruct_array(array_dict) err_msg = "Heads from original and split models do not match" np.testing.assert_allclose(new_heads, original_heads, err_msg=err_msg) + + +def test_control_records(function_tmpdir): + nrow = 10 + ncol = 10 + nper = 3 + + sim = flopy.mf6.MFSimulation() + ims = flopy.mf6.ModflowIms(sim, complexity="SIMPLE") + + tdis = flopy.mf6.ModflowTdis( + sim, + nper=nper, + perioddata=( + (1., 1, 1.), + (1., 1, 1.), + (1., 1, 1.) + ) + ) + + gwf = flopy.mf6.ModflowGwf(sim, save_flows=True) + + botm2 = np.ones((nrow, ncol)) * 20 + dis = flopy.mf6.ModflowGwfdis( + gwf, + nlay=2, + nrow=nrow, + ncol=ncol, + delr=1, + delc=1, + top=35, + botm=[30, botm2], + idomain=1 + ) + + ic = flopy.mf6.ModflowGwfic(gwf, strt=32) + npf = flopy.mf6.ModflowGwfnpf( + gwf, + k=[ + 1., + { + "data": np.ones((10, 10)) * 0.75, + "filename": "k.l2.txt", + "iprn": 1, + "factor": 1 + } + ], + k33=[ + np.ones((nrow, ncol)), + { + "data": np.ones((nrow, ncol)) * 0.5, + "filename": "k33.l2.bin", + "iprn": 1, + "factor": 1, + "binary": True + } + ] + + ) + + wel_rec = [((0, 4, 5), -10), ] + + spd = { + 0: wel_rec, + 1: { + "data": wel_rec, + "filename": "wel.1.txt" + }, + 2: { + "data": wel_rec, + "filename": "wel.2.bin", + "binary": True + } + } + + wel = flopy.mf6.ModflowGwfwel( + gwf, + stress_period_data=spd + ) + + chd_rec = [] + for cond, j in ((30, 0), (22, 9)): + for i in range(10): + chd_rec.append(((0, i, j), cond)) + + chd = flopy.mf6.ModflowGwfchd( + gwf, + stress_period_data={0: chd_rec} + ) + + arr = np.zeros((10, 10), dtype=int) + arr[0:5, :] = 1 + + mfsplit = flopy.mf6.utils.Mf6Splitter(sim) + new_sim = mfsplit.split_model(arr) + + ml1 = new_sim.get_model("model_1") + + kls = ml1.npf.k._data_storage.layer_storage.multi_dim_list + if kls[0].data_storage_type.value != 2: + raise AssertionError("Constants not being preserved for MFArray") + + if kls[1].data_storage_type.value != 3 or kls[1].binary: + raise AssertionError( + "External ascii files not being preserved for MFArray" + ) + + k33ls = ml1.npf.k33._data_storage.layer_storage.multi_dim_list + if k33ls[1].data_storage_type.value != 3 or not k33ls[1].binary: + raise AssertionError( + "Binary file input not being preserved for MFArray" + ) + + spd_ls1 = ml1.wel.stress_period_data._data_storage[1].layer_storage.multi_dim_list[0] + spd_ls2 = ml1.wel.stress_period_data._data_storage[2].layer_storage.multi_dim_list[0] + + if spd_ls1.data_storage_type.value != 3 or spd_ls1.binary: + raise AssertionError( + "External ascii files not being preserved for MFList" + ) + + if spd_ls2.data_storage_type.value != 3 or not spd_ls2.binary: + raise AssertionError( + "External binary file input not being preseved for MFList" + ) + + diff --git a/flopy/mf6/utils/model_splitter.py b/flopy/mf6/utils/model_splitter.py index 6b1905dec4..1c75c6c39c 100644 --- a/flopy/mf6/utils/model_splitter.py +++ b/flopy/mf6/utils/model_splitter.py @@ -1,5 +1,6 @@ import inspect import json +import warnings import numpy as np @@ -366,6 +367,7 @@ def optimize_splitting_mask(self, nparts): adv_pkg_weights = np.zeros((ncpl,), dtype=int) lak_array = np.zeros((ncpl,), dtype=int) laks = [] + hfbs = [] for _, 
package in self._model.package_dict.items(): if isinstance( package, @@ -373,8 +375,13 @@ def optimize_splitting_mask(self, nparts): modflow.ModflowGwfsfr, modflow.ModflowGwfuzf, modflow.ModflowGwflak, + modflow.ModflowGwfhfb, ), ): + if isinstance(package, modflow.ModflowGwfhfb): + hfbs.append(package) + continue + if isinstance(package, modflow.ModflowGwflak): cellids = package.connectiondata.array.cellid else: @@ -410,6 +417,23 @@ def optimize_splitting_mask(self, nparts): mnum = np.unique(membership[idx])[0] membership[idx] = mnum + if hfbs: + for hfb in hfbs: + for recarray in hfb.stress_period_data.data.values(): + cellids1 = recarray.cellid1 + cellids2 = recarray.cellid2 + _, nodes1 = self._cellid_to_layer_node(cellids1) + _, nodes2 = self._cellid_to_layer_node(cellids2) + mnums1 = membership[nodes1] + mnums2 = membership[nodes2] + ev = np.equal(mnums1, mnums2) + if np.alltrue(ev): + continue + idx = np.where(~ev)[0] + mnum_to = mnums1[idx] + adj_nodes = nodes2[idx] + membership[adj_nodes] = mnum_to + return membership.reshape(shape) def reconstruct_array(self, arrays): @@ -616,6 +640,24 @@ def _remap_nodes(self, array): """ array = np.ravel(array) + idomain = self._modelgrid.idomain.reshape((-1, self._ncpl)) + mkeys = np.unique(array) + bad_keys = [] + for mkey in mkeys: + count = 0 + mask = np.where(array == mkey) + for arr in idomain: + check = arr[mask] + count += np.count_nonzero(check) + if count == 0: + bad_keys.append(mkey) + + if bad_keys: + raise KeyError( + f"{bad_keys} are not in the active model extent; " + f"please adjust the model splitting array" + ) + if self._modelgrid.iverts is None: self._map_iac_ja_connections() else: @@ -1012,7 +1054,49 @@ def _remap_disu(self, mapped_data): return mapped_data - def _remap_array(self, item, mfarray, mapped_data): + def _remap_transient_array(self, item, mftransient, mapped_data): + """ + Method to split and remap transient arrays to new models + + Parameters + ---------- + item : str + variable name + mftransient : mfdataarray.MFTransientArray + transient array object + mapped_data : dict + dictionary of remapped package data + + Returns + ------- + dict + """ + if mftransient.array is None: + return mapped_data + + d0 = {mkey: {} for mkey in self._model_dict.keys()} + for per, array in enumerate(mftransient.array): + storage = mftransient._data_storage[per] + how = [ + i.data_storage_type.value + for i in storage.layer_storage.multi_dim_list + ] + binary = [i.binary for i in storage.layer_storage.multi_dim_list] + fnames = [i.fname for i in storage.layer_storage.multi_dim_list] + + d = self._remap_array( + item, array, mapped_data, how=how, binary=binary, fnames=fnames + ) + + for mkey in d.keys(): + d0[mkey][per] = d[mkey][item] + + for mkey, values in d0.items(): + mapped_data[mkey][item] = values + + return mapped_data + + def _remap_array(self, item, mfarray, mapped_data, **kwargs): """ Method to remap array nodes to each model @@ -1029,10 +1113,25 @@ def _remap_array(self, item, mfarray, mapped_data): ------- dict """ - if mfarray.array is None: - return mapped_data - + how = kwargs.pop("how", []) + binary = kwargs.pop("binary", []) + fnames = kwargs.pop("fnames", None) if not hasattr(mfarray, "size"): + if mfarray.array is None: + return mapped_data + + how = [ + i.data_storage_type.value + for i in mfarray._data_storage.layer_storage.multi_dim_list + ] + binary = [ + i.binary + for i in mfarray._data_storage.layer_storage.multi_dim_list + ] + fnames = [ + i.fname + for i in 
mfarray._data_storage.layer_storage.multi_dim_list + ] mfarray = mfarray.array nlay = 1 @@ -1068,11 +1167,46 @@ def _remap_array(self, item, mfarray, mapped_data): new_array[new_nodes] = original_arr[old_nodes] + if how and item != "idomain": + new_input = [] + i0 = 0 + i1 = new_ncpl + lay = 0 + for h in how: + if h == 1: + # internal array + new_input.append(new_array[i0:i1]) + elif h == 2: + # constant, parse the original array data + new_input.append(original_arr[ncpl * lay]) + else: + # external array + tmp = fnames[lay].split(".") + filename = f"{'.'.join(tmp[:-1])}.{mkey}.{tmp[-1]}" + + cr = { + "filename": filename, + "factor": 1, + "iprn": 1, + "data": new_array[i0:i1], + "binary": binary[lay], + } + + new_input.append(cr) + + i0 += new_ncpl + i1 += new_ncpl + lay += 1 + + new_array = new_input + mapped_data[mkey][item] = new_array return mapped_data - def _remap_mflist(self, item, mflist, mapped_data, transient=False): + def _remap_mflist( + self, item, mflist, mapped_data, transient=False, **kwargs + ): """ Method to remap mflist data to each model @@ -1087,15 +1221,23 @@ def _remap_mflist(self, item, mflist, mapped_data, transient=False): transient : bool flag to indicate this is transient stress period data flag is needed to trap for remapping mover data. + Returns ------- dict """ mvr_remap = {} + how = kwargs.pop("how", 1) + binary = kwargs.pop("binary", False) + fname = kwargs.pop("fname", None) if hasattr(mflist, "array"): if mflist.array is None: return mapped_data recarray = mflist.array + how = mflist._data_storage._data_storage_type.value + binary = mflist._data_storage.layer_storage.multi_dim_list[ + 0 + ].binary else: recarray = mflist @@ -1123,6 +1265,16 @@ def _remap_mflist(self, item, mflist, mapped_data, transient=False): new_recarray = recarray[idx] new_recarray["cellid"] = new_cellids + if how == 3 and new_recarray is not None: + tmp = fname.split(".") + filename = f"{'.'.join(tmp[:-1])}.{mkey}.{tmp[-1]}" + + new_recarray = { + "data": new_recarray, + "binary": binary, + "filename": filename, + } + mapped_data[mkey][item] = new_recarray if not transient: @@ -2539,9 +2691,28 @@ def _remap_transient_list(self, item, mftransientlist, mapped_data): mapped_data[mkey][item] = None return mapped_data + how = { + p: i.layer_storage.multi_dim_list[0].data_storage_type.value + for p, i in mftransientlist._data_storage.items() + } + binary = { + p: i.layer_storage.multi_dim_list[0].binary + for p, i in mftransientlist._data_storage.items() + } + fnames = { + p: i.layer_storage.multi_dim_list[0].fname + for p, i in mftransientlist._data_storage.items() + } + for per, recarray in mftransientlist.data.items(): d, mvr_remaps = self._remap_mflist( - item, recarray, mapped_data, transient=True + item, + recarray, + mapped_data, + transient=True, + how=how[per], + binary=binary[per], + fname=fnames[per], ) for mkey in self._model_dict.keys(): if mapped_data[mkey][item] is None: @@ -2714,6 +2885,11 @@ def _remap_package(self, package, ismvr=False): for k, v in self._offsets[mkey].items(): mapped_data[mkey][k] = v + elif isinstance(value, mfdataarray.MFTransientArray): + mapped_data = self._remap_transient_array( + item, value, mapped_data + ) + elif isinstance(value, mfdataarray.MFArray): mapped_data = self._remap_array(item, value, mapped_data) From 021ffaa8c406cda742f239673c4d1a802021b8b9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dav=C3=ADd=20Brakenhoff?= Date: Mon, 21 Aug 2023 23:38:57 +0200 Subject: [PATCH 044/101] fix(gridintersect): add multilinestring tests (#1924) - tests 
for example in #1922 --- autotest/test_gridintersect.py | 52 ++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/autotest/test_gridintersect.py b/autotest/test_gridintersect.py index 498320e53b..ff61b229d7 100644 --- a/autotest/test_gridintersect.py +++ b/autotest/test_gridintersect.py @@ -425,6 +425,22 @@ def test_rect_grid_multilinestring_in_one_cell(): assert result.cellids[0] == (1, 0) +@requires_pkg("shapely") +def test_rect_grid_multilinestring_in_multiple_cells(): + gr = get_rect_grid() + ix = GridIntersect(gr, method="structured") + result = ix.intersect( + MultiLineString( + [ + LineString([(20.0, 0.0), (7.5, 12.0), (2.5, 7.0), (0.0, 4.5)]), + LineString([(5.0, 19.0), (2.5, 7.0)]), + ] + ) + ) + assert len(result) == 3 + assert np.allclose(sum(result.lengths), 40.19197584109293) + + @requires_pkg("shapely") def test_rect_grid_linestring_in_and_out_of_cell(): gr = get_rect_grid() @@ -537,6 +553,23 @@ def test_rect_grid_multilinestring_in_one_cell_shapely(rtree): assert result.cellids[0] == (1, 0) +@requires_pkg("shapely") +@rtree_toggle +def test_rect_grid_multilinestring_in_multiple_cells_shapely(rtree): + gr = get_rect_grid() + ix = GridIntersect(gr, method="vertex", rtree=rtree) + result = ix.intersect( + MultiLineString( + [ + LineString([(20.0, 0.0), (7.5, 12.0), (2.5, 7.0), (0.0, 4.5)]), + LineString([(5.0, 19.0), (2.5, 7.0)]), + ] + ) + ) + assert len(result) == 3 + assert np.allclose(sum(result.lengths), 40.19197584109293) + + @requires_pkg("shapely") @rtree_toggle def test_rect_grid_linestring_in_and_out_of_cell_shapely(rtree): @@ -655,6 +688,25 @@ def test_tri_grid_multilinestring_in_one_cell(rtree): assert result.cellids[0] == 4 +@requires_pkg("shapely") +@rtree_toggle +def test_tri_grid_multilinestring_in_multiple_cells(rtree): + gr = get_tri_grid() + if gr == -1: + return + ix = GridIntersect(gr, rtree=rtree) + result = ix.intersect( + MultiLineString( + [ + LineString([(20.0, 0.0), (7.5, 12.0), (2.5, 7.0), (0.0, 4.5)]), + LineString([(5.0, 19.0), (2.5, 7.0)]), + ] + ) + ) + assert len(result) == 5 + assert np.allclose(sum(result.lengths), 40.19197584109293) + + @requires_pkg("shapely") @rtree_toggle def test_tri_grid_linestrings_on_boundaries_return_all_ix(rtree): From 6cda49cf8f5ffbbb8d87561704bd844992036ec4 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 25 Aug 2023 13:11:58 -0400 Subject: [PATCH 045/101] docs: miscellaneous fixes/cleanup (#1932) * pin sphinx==7.1.2 until nbsphinx gallery issue fixed * remove unused pydata-sphinx-theme * remove nbsphinx-link dependency * minor cleanup to conf.py * update main.rst --- .docs/conf.py | 17 ++++++----------- .docs/main.rst | 2 +- pyproject.toml | 4 +--- 3 files changed, 8 insertions(+), 15 deletions(-) diff --git a/.docs/conf.py b/.docs/conf.py index f502cebefc..197732cd08 100644 --- a/.docs/conf.py +++ b/.docs/conf.py @@ -11,7 +11,6 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. 
# import os -import subprocess import sys from pathlib import Path @@ -104,12 +103,13 @@ # -- convert tutorial scripts and run example notebooks ---------------------- if not on_rtd: - nbs = Path("Notebooks").glob("*.py") - for nb in nbs: - if nb.with_suffix(".ipynb").exists(): - print(f"{nb} already exists, skipping") + nbs_py = Path("Notebooks").glob("*.py") + for py in nbs_py: + ipynb = py.with_suffix(".ipynb") + if ipynb.exists(): + print(f"{ipynb} already exists, skipping") continue - cmd = ("jupytext", "--to", "ipynb", "--execute", str(nb)) + cmd = ("jupytext", "--to", "ipynb", "--execute", str(py)) print(" ".join(cmd)) os.system(" ".join(cmd)) @@ -142,7 +142,6 @@ "IPython.sphinxext.ipython_console_highlighting", # lowercase didn't work "sphinx.ext.autosectionlabel", "nbsphinx", - "nbsphinx_link", "recommonmark", ] @@ -223,10 +222,6 @@ "doc_path": "doc", } -html_css_files = [ - "css/custom.css", -] - # A shorter title for the navigation bar. Default is the same as html_title. html_short_title = "flopy" html_favicon = "_images/flopylogo_sm.png" diff --git a/.docs/main.rst b/.docs/main.rst index f81f54a218..b275a34c32 100644 --- a/.docs/main.rst +++ b/.docs/main.rst @@ -5,7 +5,7 @@ .. image:: _images/flopylogo.png -**Documentation for version 3.3.7 --- release candidate** +**Documentation for version 3.5.0.dev0** Documentation is generated with Sphinx from the `FloPy repository `_. diff --git a/pyproject.toml b/pyproject.toml index 12750e0a6b..45544c3a58 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -83,12 +83,10 @@ doc = [ "ipython[kernel]", "jupytext", "nbsphinx", - "nbsphinx-link", - "pydata-sphinx-theme", "PyYAML", "recommonmark", "rtds-action", - "sphinx >=4", + "sphinx ==7.1.2", "sphinx-rtd-theme >=1", ] From 66b0903f4812b6cef3020e7cdfd6e2c2d26f6d52 Mon Sep 17 00:00:00 2001 From: scottrp <45947939+scottrp@users.noreply.github.com> Date: Fri, 25 Aug 2023 10:55:30 -0700 Subject: [PATCH 046/101] fix(binary file): Was writing binary file information twice to external files (#1925) (#1928) --- flopy/mf6/data/mffileaccess.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/flopy/mf6/data/mffileaccess.py b/flopy/mf6/data/mffileaccess.py index 653f54a29b..bfadbde27d 100644 --- a/flopy/mf6/data/mffileaccess.py +++ b/flopy/mf6/data/mffileaccess.py @@ -215,6 +215,7 @@ def write_binary_file( if data.size == modelgrid.nnodes: write_multi_layer = False if write_multi_layer: + # write data from each layer with a separate header for layer, value in enumerate(data): self._write_layer( fd, @@ -228,6 +229,7 @@ def write_binary_file( layer + 1, ) else: + # write data with a single header self._write_layer( fd, data, @@ -238,7 +240,6 @@ def write_binary_file( text, fname, ) - data.tofile(fd) fd.close() def _write_layer( From 51b2ecccaf0e4759549ef568e1fdcb8795540970 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 25 Aug 2023 18:36:26 -0400 Subject: [PATCH 047/101] docs: update markdown parser, add sections to RTD site (#1934) * replace recommonmark with myst-parser (supports md tables) * move markdown files from docs/ to .docs to include in RTD site --- .docs/conf.py | 4 +- .docs/faq.rst | 2 +- .docs/introduction.rst | 6 +- .docs/main.rst | 3 + {docs => .docs/md}/generate_classes.md | 8 +++ {docs => .docs/md}/get_modflow.md | 0 {docs => .docs/md}/model_checks.md | 17 +++-- .../md/optional_dependencies.md | 6 +- {docs => .docs/md}/version_changes.md | 2 + .docs/utilities.rst | 13 ++++ .github/workflows/release.yml | 11 ++- DEVELOPER.md | 12 ++-- README.md | 4 +- docs/mf6.md 
| 71 ------------------- flopy/utils/utl_import.py | 2 +- pyproject.toml | 2 +- 16 files changed, 69 insertions(+), 94 deletions(-) rename {docs => .docs/md}/generate_classes.md (92%) rename {docs => .docs/md}/get_modflow.md (100%) rename {docs => .docs/md}/model_checks.md (87%) rename docs/flopy_method_dependencies.md => .docs/md/optional_dependencies.md (93%) rename {docs => .docs/md}/version_changes.md (99%) create mode 100644 .docs/utilities.rst delete mode 100644 docs/mf6.md diff --git a/.docs/conf.py b/.docs/conf.py index 197732cd08..dd3b87f92c 100644 --- a/.docs/conf.py +++ b/.docs/conf.py @@ -142,7 +142,7 @@ "IPython.sphinxext.ipython_console_highlighting", # lowercase didn't work "sphinx.ext.autosectionlabel", "nbsphinx", - "recommonmark", + "myst_parser", ] # Settings for GitHub actions integration @@ -164,7 +164,7 @@ # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path. exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "**.ipynb_checkpoints"] -source_suffix = ".rst" +source_suffix = {".rst": "restructuredtext", ".md": "markdown"} # The encoding of source files. source_encoding = "utf-8" diff --git a/.docs/faq.rst b/.docs/faq.rst index 66291a2e57..dc3f363cf6 100644 --- a/.docs/faq.rst +++ b/.docs/faq.rst @@ -26,4 +26,4 @@ You can get a list of the directories in your system path using: os.getenv("PATH") -There is a `get-modflow` command line utility that can be used to download MODFLOW executables. See the `get-modflow documentation `_ for more information. +There is a `get-modflow` command line utility that can be used to download MODFLOW executables. See the `get-modflow documentation `_ for more information. diff --git a/.docs/introduction.rst b/.docs/introduction.rst index a43501f8b2..82e006967e 100644 --- a/.docs/introduction.rst +++ b/.docs/introduction.rst @@ -23,13 +23,17 @@ functionality for MODFLOW 6, helper functions to use GIS shapefiles and raster files to create MODFLOW datasets, and common plotting and export functionality. +FloPy provides separate APIs for interacting with MF6 and non-MF6 models. +MODFLOW 6 class definitions are automatically generated from definition +(DFN) files, text files describing the format of MF6 input files. + FloPy is an open-source project and any assistance is welcomed. Please email the development team if you want to contribute. Return to the Github `FloPy `_ website. Installation ------------------- +------------ FloPy can be installed using conda (from the conda-forge channel) or pip. diff --git a/.docs/main.rst b/.docs/main.rst index b275a34c32..a2568d3fce 100644 --- a/.docs/main.rst +++ b/.docs/main.rst @@ -15,8 +15,11 @@ Contents: :maxdepth: 2 introduction + md/version_changes + md/optional_dependencies tutorials examples + utilities faq code diff --git a/docs/generate_classes.md b/.docs/md/generate_classes.md similarity index 92% rename from docs/generate_classes.md rename to .docs/md/generate_classes.md index 70ffa40851..7056918174 100644 --- a/docs/generate_classes.md +++ b/.docs/md/generate_classes.md @@ -1,5 +1,13 @@ # Generating FloPy classes + + + +- [Generating FloPy classes](#generating-flopy-classes) + - [Testing class generation](#testing-class-generation) + + + MODFLOW 6 input continues to evolve as new models, packages, and options are developed, updated, and supported. All MODFLOW 6 input is described by DFN (definition) files, which are simple text files that describe the blocks and keywords in each input file. 
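A minimal sketch of driving the regeneration utility described below from Python (the owner, repo, and ref values here are illustrative, and network access to GitHub is assumed):

    from flopy.mf6.utils import generate_classes

    # rebuild the flopy.mf6 classes from the DFN files on a given
    # branch, tag, or commit of a modflow6 repository, without keeping
    # a backup of the replaced definition files
    generate_classes(owner="MODFLOW-USGS", repo="modflow6", ref="develop", backup=False)

The same operation is exposed on the command line as `python -m flopy.mf6.utils.generate_classes` with `--owner`, `--repo`, `--ref`, and `--no-backup` flags.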
These definition files are used to build the input and output guide for MODFLOW 6. These definition files are also used to automatically generate FloPy classes for creating, reading and writing MODFLOW 6 models, packages, and options. FloPy and MODFLOW 6 are kept in sync by these DFN (definition) files, and therefore, it may be necessary for a user to update FloPy using a custom set of definition files, or a set of definition files from a previous release. The FloPy classes for MODFLOW 6 are largely generated by a utility which converts DFN files in a modflow6 repository on GitHub or on the local machine into Python source files in your local FloPy install. For instance (output much abbreviated): diff --git a/docs/get_modflow.md b/.docs/md/get_modflow.md similarity index 100% rename from docs/get_modflow.md rename to .docs/md/get_modflow.md diff --git a/docs/model_checks.md b/.docs/md/model_checks.md similarity index 87% rename from docs/model_checks.md rename to .docs/md/model_checks.md index ad187280e7..d3b9583cd0 100644 --- a/docs/model_checks.md +++ b/.docs/md/model_checks.md @@ -1,7 +1,16 @@ -FloPy Model Checks ------------------------------------------------ +# FloPy Model Checks -## List of available FloPy model checks + + + +- [FloPy Model Checks](#flopy-model-checks) +- [Available model checks](#available-model-checks) +- [Visualizations](#visualizations) +- [Additional model checks and visualizations](#additional-model-checks-and-visualizations) + + + +## Available model checks |Package | Check | Implemented | Type | | :-----------| :------------| :------------------ | :-------------| @@ -50,7 +59,7 @@ FloPy Model Checks | SFR | gaps in segment and reach routing | Not supported | Warning | | SFR | outlets in interior of model domain | Not supported | Warning | | WEL | PHIRAMP is < 1 and should be close to recommended value of 0.001 | Not supported | Warning | -| MPSIM | invalid stop times | Supported | +| MPSIM | invalid stop times | Supported | | ## Visualizations diff --git a/docs/flopy_method_dependencies.md b/.docs/md/optional_dependencies.md similarity index 93% rename from docs/flopy_method_dependencies.md rename to .docs/md/optional_dependencies.md index aa1ac03687..bf70055cbd 100644 --- a/docs/flopy_method_dependencies.md +++ b/.docs/md/optional_dependencies.md @@ -1,7 +1,9 @@ -Additional dependencies to use optional FloPy helper methods are listed below. +# Optional dependencies + +Additional dependencies to use optional FloPy helper methods are listed below. These may be installed with `pip install flopy[optional]`. 
| Method | Python Package | -| ------------------------------------------------------------------------------------ |--------------------------------------------------------------------------| +|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------| | `.plot_shapefile()` | **Pyshp** >= 2.0.0 | | `.to_shapefile()` | **Pyshp** >= 2.0.0 | | `.export(*.shp)` | **Pyshp** >= 2.0.0 | diff --git a/docs/version_changes.md b/.docs/md/version_changes.md similarity index 99% rename from docs/version_changes.md rename to .docs/md/version_changes.md index 934d29fc23..a96f44d1e4 100644 --- a/docs/version_changes.md +++ b/.docs/md/version_changes.md @@ -1,3 +1,5 @@ +# Changelog + ### Version 3.4.1 #### Bug fixes diff --git a/.docs/utilities.rst b/.docs/utilities.rst new file mode 100644 index 0000000000..0bb0730e1d --- /dev/null +++ b/.docs/utilities.rst @@ -0,0 +1,13 @@ +Utilities +========= + +The FloPy package installs several command-line utilities that can be used to +help with installing MODFLOW 6 and other binaries, and regenerate FloPy class +definitions for compatibility with MODFLOW 6. + +.. toctree:: + :maxdepth: 2 + + md/generate_classes + md/get_modflow + md/model_checks diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index dbfddeeea0..9b0432c254 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -105,9 +105,14 @@ jobs: sed -i 's/#### Refactor/#### Refactoring/' CHANGELOG.md sed -i 's/#### Test/#### Testing/' CHANGELOG.md - # prepend changelog to docs/version_changes.md - cat CHANGELOG.md docs/version_changes.md > version_changes.md - sudo mv version_changes.md docs/version_changes.md + # prepend changelog to .docs/md/version_changes.md + file=".docs/md/version_changes.md" + temp="version_changes.md" + clog="CHANGELOG.md" + echo "$(tail -n +2 $file)" > $file + cat $clog $file > $temp + sudo mv $temp $file + sed -i '1i # Changelog' $file - name: Upload changelog uses: actions/upload-artifact@v3 diff --git a/DEVELOPER.md b/DEVELOPER.md index ad1d04c76b..cda40aa717 100644 --- a/DEVELOPER.md +++ b/DEVELOPER.md @@ -68,7 +68,7 @@ Alternatively, with Anaconda or Miniconda: conda env create -f etc/environment.yml conda activate flopy -The `flopy` package has a number of [optional dependencies](docs/flopy_method_dependencies.md), as well as extra dependencies required for linting, testing, and building documentation. Extra dependencies are listed in the `test`, `lint`, `optional`, and `doc` groups under the `[project.optional-dependencies]` section in `pyproject.toml`. Core, linting, testing and optional dependencies are included in the Conda environment in `etc/environment.yml`. Only core dependencies are included in the PyPI package — to install extra dependency groups with pip, use `pip install ".[<group>]"`. +The `flopy` package has a number of [optional dependencies](.docs/md/optional_dependencies.md), as well as extra dependencies required for linting, testing, and building documentation. Extra dependencies are listed in the `test`, `lint`, `optional`, and `doc` groups under the `[project.optional-dependencies]` section in `pyproject.toml`. Core, linting, testing and optional dependencies are included in the Conda environment in `etc/environment.yml`. Only core dependencies are included in the PyPI package — to install extra dependency groups with pip, use `pip install ".[<group>]"`. 
For instance, to install all extra dependency groups: pip install ".[test, lint, optional, doc]" @@ -97,7 +97,7 @@ To develop `flopy` you will need a number of MODFLOW executables installed. #### Scripted installation -A utility script is provided to easily download and install executables: after installing `flopy`, just run `get-modflow` (see the script's [documentation](docs/get_modflow.md) for more info). +A utility script is provided to easily download and install executables: after installing `flopy`, just run `get-modflow` (see the script's [documentation](.docs/md/get_modflow.md) for more info). #### Manually installing executables @@ -141,7 +141,7 @@ This can be fixed by running `Install Certificates.command` in your Python insta ### Updating FloPy packages -FloPy must be up-to-date with the version of MODFLOW 6 and other executables it is being used with. Synchronization is achieved via "definition" (DFN) files, which define the format of MODFLOW6 inputs and outputs. FloPy contains Python source code automatically generated from DFN files. This is done with the `generate_classes` function in `flopy.mf6.utils`. See [this document](./docs/generate_classes.md) for usage examples. +FloPy must be up-to-date with the version of MODFLOW 6 and other executables it is being used with. Synchronization is achieved via "definition" (DFN) files, which define the format of MODFLOW6 inputs and outputs. FloPy contains Python source code automatically generated from DFN files. This is done with the `generate_classes` function in `flopy.mf6.utils`. See [this document](./.docs/md/generate_classes.md) for usage examples. ## Examples @@ -153,7 +153,7 @@ Tutorial scripts are located in `examples/scripts` and `examples/Tutorials`. The Each script can be invoked by name with Python per usual. The scripts can also be converted to notebooks with `jupytext`. By default, all scripts create and attempt to clean up temporary working directories. (On Windows, Python's `TemporaryDirectory` can raise permissions errors, so cleanup is trapped with `try/except`.) Some scripts also accept a `--quiet` flag, curtailing verbose output, and a `--keep` option to specify a working directory of the user's choice. -Some of the scripts use [optional dependencies](docs/flopy_method_dependencies.md). If you're using `pip`, make sure these have been installed with `pip install ".[optional]"`. The conda environment provided in `etc/environment.yml` already includes all optional dependencies. +Some of the scripts use [optional dependencies](.docs/md/optional_dependencies.md). If you're using `pip`, make sure these have been installed with `pip install ".[optional]"`. The conda environment provided in `etc/environment.yml` already includes all optional dependencies. ### Notebooks @@ -167,7 +167,7 @@ To convert a paired Python script to an `.ipynb` notebook, run: ``` jupytext --from py --to ipynb path/to/notebook ``` -Notebook scripts can be run like any other Python script. To run `.ipynb` notebooks, you will need `jupyter` installed (`jupyter` is included with the `test` optional dependency group in `pyproject.toml`). Some of the notebooks use [optional dependencies](docs/flopy_method_dependencies.md) as well. +Notebook scripts can be run like any other Python script. To run `.ipynb` notebooks, you will need `jupyter` installed (`jupyter` is included with the `test` optional dependency group in `pyproject.toml`). Some of the notebooks use [optional dependencies](.docs/md/optional_dependencies.md) as well. 
To install jupyter and optional dependencies at once: @@ -233,7 +233,7 @@ Some tests require environment variables. Currently the following variables are - `GITHUB_TOKEN` -The `GITHUB_TOKEN` variable is needed because the [`get-modflow`](docs/get_modflow.md) utility invokes the GitHub API — to avoid rate-limiting, requests to the GitHub API should bear an [authentication token](https://github.com/settings/tokens). A token is automatically provided to GitHub Actions CI jobs via the [`github` context's](https://docs.github.com/en/actions/learn-github-actions/contexts#github-context) `token` attribute, however a personal access token is needed to run the tests locally. To create a personal access token, go to [GitHub -> Settings -> Developer settings -> Personal access tokens -> Tokens (classic)](https://github.com/settings/tokens). The `get-modflow` utility automatically detects and uses the `GITHUB_TOKEN` environment variable if available. +The `GITHUB_TOKEN` variable is needed because the [`get-modflow`](.docs/md/get_modflow.md) utility invokes the GitHub API — to avoid rate-limiting, requests to the GitHub API should bear an [authentication token](https://github.com/settings/tokens). A token is automatically provided to GitHub Actions CI jobs via the [`github` context's](https://docs.github.com/en/actions/learn-github-actions/contexts#github-context) `token` attribute, however a personal access token is needed to run the tests locally. To create a personal access token, go to [GitHub -> Settings -> Developer settings -> Personal access tokens -> Tokens (classic)](https://github.com/settings/tokens). The `get-modflow` utility automatically detects and uses the `GITHUB_TOKEN` environment variable if available. Environment variables can be set as usual, but a more convenient way to store variables for all future sessions is to create a text file called `.env` in the `autotest` directory, containing variables in `NAME=VALUE` format, one on each line. [`pytest-dotenv`](https://github.com/quiqua/pytest-dotenv) will detect and add these to the environment provided to the test process. All `.env` files in the project are ignored in `.gitignore` so there is no danger of checking in secrets unless the file is misnamed. diff --git a/README.md b/README.md index e293049cc9..113a79949e 100644 --- a/README.md +++ b/README.md @@ -34,7 +34,7 @@ Documentation Installation ----------------------------------------------- -FloPy requires **Python** 3.8 (or higher), **NumPy** 1.15.0 (or higher), and **matplotlib** 1.4.0 (or higher). Dependencies for optional FloPy methods are summarized [here](docs/flopy_method_dependencies.md). +FloPy requires **Python** 3.8 (or higher), **NumPy** 1.15.0 (or higher), and **matplotlib** 1.4.0 (or higher). Dependencies for optional FloPy methods are summarized [here](.docs/md/optional_dependencies.md). To install FloPy type: @@ -51,7 +51,7 @@ After FloPy is installed, MODFLOW and related programs can be installed using th get-modflow :flopy -See documentation [get_modflow.md](https://github.com/modflowpy/flopy/blob/develop/docs/get_modflow.md) for more information. +See [documentation](.docs/md/get_modflow.md) for more information. Getting Started diff --git a/docs/mf6.md b/docs/mf6.md deleted file mode 100644 index 29f2492cb3..0000000000 --- a/docs/mf6.md +++ /dev/null @@ -1,71 +0,0 @@ -Introduction ------------------------------------------------ -[MODFLOW 6](https://water.usgs.gov/ogw/modflow/MODFLOW.html) is the latest core release of the U.S. 
Geological Survey's MODFLOW model. MODFLOW 6 combines many of the capabilities of previous MODFLOW versions into a single code. A major change in MODFLOW 6 is the redesign of the input file structure to use blocks and keywords. Because of this change, existing Flopy classes can no longer be used. - -This Flopy release contains beta support for an entirely new set of classes to support MODFLOW 6 models. These classes were designed from scratch and are based on "definition files", which describe the block and keywords for each input file. These new MODFLOW 6 capabilities in Flopy are considered beta, because they need additional testing, and because it is likely that some of the underlying code will change to meet user needs. - -There are many important differences between the Flopy classes for MODFLOW 6 and the Flopy classes for previous MODFLOW versions. Unfortunately, this will mean that existing Flopy users will need to rewrite some parts of their scripts in order to generate MODFLOW 6 models. Because the new classes are updated programmatically from definition files, it will be easier to keep them up-to-date with new MODFLOW 6 capabilities as they are released. - - -Models and Packages ------------------------------------------------ -The following is a list of the classes available to work with MODFLOW 6 models. These classes should support all of the options available in the current version of MODFLOW 6. - -* MFSimulation -* ModflowGwf -* ModflowNam -* ModflowTdis -* ModflowGwfgwf -* ModflowGwfnam -* ModflowGwfdis -* ModflowGwfdisv -* ModflowGwfdisu -* ModflowGwfic -* ModflowGwfnpf -* ModflowGwfsto -* ModflowGwfhfb -* ModflowGwfchd -* ModflowGwfwel -* ModflowGwfdrn -* ModflowGwfriv -* ModflowGwfghb -* ModflowGwfrch -* ModflowGwfrcha -* ModflowGwfevt -* ModflowGwfevta -* ModflowGwfmaw -* ModflowGwfsfr -* ModflowGwflak -* ModflowGwfuzf -* ModflowGwfmvr -* ModflowGwfgnc -* ModflowGwfoc -* ModflowIms -* ModflowMvr -* ModflowGnc -* ModflowUtlobs -* ModflowUtlts -* ModflowUtltab -* ModflowUtltas - - -Getting Help ------------------------------------------------ -Help for these classes is limited at the moment. The best way to understand these new capabilities is to look at the jupyter Notebooks. There are also several scripts available in the flopy/autotest folder that may provide additional information on creating, writing, and loading MODFLOW 6 models with Flopy. - - -jupyter Notebooks ------------------------------------------------ -Instructions for using the new MODFLOW 6 capabilities in Flopy are described in the following Jupyter Notebooks. - -1. Tutorial of MODFLOW 6 capabilities in Flopy ([flopy3_mf6_tutorial](../.docs/Notebooks/flopy3_mf6_tutorial.ipynb) Notebook) -2. Building the simple lake model ([flopy3_mf6_A_simple-model](../.docs/Notebooks/flopy3_mf6_A_simple-model.ipynb) Notebook) -3. Building a complex model ([flopy3_mf6_B_complex-model](../.docs/Notebooks/flopy3_mf6_B_complex-model.ipynb) Notebook) -4. Building an LGR model (coming soon) - - -Future Plans ------------------------------------------------ -- Improved documentation -- New plotting capabilities and refactoring of the current spatial reference class -- Support for exporting to netcdf, vtk, and other formats diff --git a/flopy/utils/utl_import.py b/flopy/utils/utl_import.py index 166102814c..a7943e6b50 100644 --- a/flopy/utils/utl_import.py +++ b/flopy/utils/utl_import.py @@ -44,7 +44,7 @@ from .parse_version import Version -# Update docs/flopy_method_dependencies.md when updating versions! 
+# Update .docs/md/optional_dependencies.md when updating versions! VERSIONS = { "shapefile": "2.0.0", diff --git a/pyproject.toml b/pyproject.toml index 45544c3a58..95ecd5ac78 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -82,9 +82,9 @@ doc = [ "flopy[optional]", "ipython[kernel]", "jupytext", + "myst-parser", "nbsphinx", "PyYAML", - "recommonmark", "rtds-action", "sphinx ==7.1.2", "sphinx-rtd-theme >=1", From 5d57fb78b27238ea69adbdb3438c8e17db55ddad Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Fri, 25 Aug 2023 18:54:53 -0400 Subject: [PATCH 048/101] fix(ParticleData): fix docstring, structured default is False (#1935) --- flopy/modpath/mp7particledata.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flopy/modpath/mp7particledata.py b/flopy/modpath/mp7particledata.py index 9da7e57490..9f8b8b393d 100644 --- a/flopy/modpath/mp7particledata.py +++ b/flopy/modpath/mp7particledata.py @@ -24,7 +24,7 @@ class ParticleData: locations or nodes. structured : bool Boolean defining if a structured (True) or unstructured - particle recarray will be created (default is True). + particle recarray will be created (default is False). particleids : list, tuple, or np.ndarray Particle ids for the defined particle locations. If particleids is None, MODPATH 7 will define the particle ids to each particle From 4f6cd47411b96460495b1f7d6582860a4d655060 Mon Sep 17 00:00:00 2001 From: w-bonelli Date: Thu, 31 Aug 2023 16:13:23 -0400 Subject: [PATCH 049/101] fix(generate_classes): use branch arg if provided (#1938) * previously branch was ignored even if provided * add repo param to download_dfn() function * fix logging output * fix test --- autotest/test_generate_classes.py | 6 ++++-- flopy/mf6/utils/generate_classes.py | 18 ++++++++---------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py index fdd6b8ae80..cfbd4bd7a5 100644 --- a/autotest/test_generate_classes.py +++ b/autotest/test_generate_classes.py @@ -93,15 +93,17 @@ def test_generate_classes_from_github_refs( mod_file_times = [Path(mod_file).stat().st_mtime for mod_file in mod_files] pprint(mod_files) - # generate classes + # split ref into owner, repo, ref name spl = ref.split("/") owner = spl[0] repo = spl[1] ref = spl[2] + + # generate classes pprint( virtualenv.run( "python -m flopy.mf6.utils.generate_classes " - f"--owner {owner} --repo {repo} ref {ref} --no-backup" + f"--owner {owner} --repo {repo} --ref {ref} --no-backup" ) ) diff --git a/flopy/mf6/utils/generate_classes.py b/flopy/mf6/utils/generate_classes.py index 602ed8fbb5..32c1d6978c 100644 --- a/flopy/mf6/utils/generate_classes.py +++ b/flopy/mf6/utils/generate_classes.py @@ -51,7 +51,7 @@ def list_files(pth, exts=["py"]): print(f" {idx:5d} - {fn}") -def download_dfn(owner, branch, new_dfn_pth): +def download_dfn(owner, repo, ref, new_dfn_pth): try: from modflow_devtools.download import download_and_unzip except ImportError: @@ -61,8 +61,7 @@ def download_dfn(owner, branch, new_dfn_pth): " pip install modflow-devtools" ) - mf6url = "https://github.com/{}/modflow6/archive/{}.zip" - mf6url = mf6url.format(owner, branch) + mf6url = f"https://github.com/{owner}/{repo}/archive/{ref}.zip" print(f" Downloading MODFLOW 6 repository from {mf6url}") with tempfile.TemporaryDirectory() as tmpdirname: dl_path = download_and_unzip(mf6url, tmpdirname, verbose=True) @@ -159,20 +158,19 @@ def generate_classes( # download the dfn files and put them in flopy.mf6.data or update using # user provided 
dfnpath if dfnpath is None: - print( - f" Updating the MODFLOW 6 classes using {owner}/{repo}/{branch}" - ) timestr = time.strftime("%Y%m%d-%H%M%S") new_dfn_pth = os.path.join(flopypth, "mf6", "data", f"dfn_{timestr}") # branch deprecated 3.5.0 - if not ref: - if not branch: - raise ValueError("branch or ref must be provided") + if not ref and not branch: + raise ValueError("branch or ref must be provided") + if branch: warn("branch is deprecated, use ref instead", DeprecationWarning) ref = branch - download_dfn(owner, ref, new_dfn_pth) + print(f" Updating the MODFLOW 6 classes using {owner}/{repo}/{ref}") + + download_dfn(owner, repo, ref, new_dfn_pth) else: print(f" Updating the MODFLOW 6 classes using {dfnpath}") assert os.path.isdir(dfnpath) From db8da0545a577479a9db5ff23f195a016b8578a4 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 5 Sep 2023 14:31:22 +1200 Subject: [PATCH 050/101] chore(deps): bump actions/checkout from 3 to 4 (#1941) --- .github/workflows/benchmark.yml | 4 ++-- .github/workflows/commit.yml | 8 ++++---- .github/workflows/examples.yml | 2 +- .github/workflows/mf6.yml | 4 ++-- .github/workflows/optional.yml | 2 +- .github/workflows/regression.yml | 2 +- .github/workflows/release.yml | 8 ++++---- .github/workflows/rtd.yml | 2 +- 8 files changed, 16 insertions(+), 16 deletions(-) diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index 37ed12e16c..e72415c424 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -24,7 +24,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python if: runner.os != 'Windows' @@ -110,7 +110,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 79decdc901..5e5163f5ce 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -20,7 +20,7 @@ jobs: timeout-minutes: 10 steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 @@ -53,7 +53,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 @@ -111,7 +111,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 @@ -168,7 +168,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python if: runner.os != 'Windows' diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index 62df8cf843..fe9b014b3b 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -23,7 +23,7 @@ jobs: timeout-minutes: 90 steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python if: runner.os != 'Windows' diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index f00b13fe1b..584e4e80ad 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -27,7 +27,7 @@ jobs: steps: - name: Checkout flopy repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 @@ -48,7 +48,7 @@ jobs: uses: modflowpy/install-gfortran-action@v1 - name: Checkout MODFLOW 6 - uses: actions/checkout@v3 
+ uses: actions/checkout@v4 with: repository: MODFLOW-USGS/modflow6 path: modflow6 diff --git a/.github/workflows/optional.yml b/.github/workflows/optional.yml index 094c92e6c4..ee29136542 100644 --- a/.github/workflows/optional.yml +++ b/.github/workflows/optional.yml @@ -22,7 +22,7 @@ jobs: steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml index 706966521a..972f1e1650 100644 --- a/.github/workflows/regression.yml +++ b/.github/workflows/regression.yml @@ -23,7 +23,7 @@ jobs: timeout-minutes: 90 steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Setup Python if: runner.os != 'Windows' diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 9b0432c254..e130f3a337 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -20,7 +20,7 @@ jobs: steps: - name: Checkout release branch - uses: actions/checkout@v3 + uses: actions/checkout@v4 with: fetch-depth: 0 @@ -144,7 +144,7 @@ jobs: steps: - name: Checkout release branch - uses: actions/checkout@v3 + uses: actions/checkout@v4 with: ref: ${{ github.ref_name }} @@ -173,7 +173,7 @@ jobs: steps: - name: Checkout master branch - uses: actions/checkout@v3 + uses: actions/checkout@v4 with: ref: master @@ -210,7 +210,7 @@ jobs: steps: - name: Checkout master branch - uses: actions/checkout@v3 + uses: actions/checkout@v4 with: ref: master diff --git a/.github/workflows/rtd.yml b/.github/workflows/rtd.yml index 5df817f687..b092e4b16e 100644 --- a/.github/workflows/rtd.yml +++ b/.github/workflows/rtd.yml @@ -29,7 +29,7 @@ jobs: steps: - name: Checkout flopy repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Output repo information run: | From 71855bdfcd37e95194b6e5928ae909a6f7ef15fa Mon Sep 17 00:00:00 2001 From: scottrp <45947939+scottrp@users.noreply.github.com> Date: Thu, 14 Sep 2023 15:03:13 -0700 Subject: [PATCH 051/101] fix(remove_model): remove_model method fix and tests (#1945) --- autotest/regression/test_mf6.py | 13 ++++++++- autotest/test_mf6.py | 47 +++++++++++++++++++++++++++++++-- flopy/mf6/mfsimbase.py | 31 +++++++++++++++++++--- 3 files changed, 85 insertions(+), 6 deletions(-) diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index 5479014475..408165187e 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -3085,7 +3085,7 @@ def test028_create_tests_sfr(function_tmpdir, example_data_path): delc=5000.0, top=top, botm=botm, - #idomain=idomain, + idomain=idomain, filename=f"{model_name}.dis", ) strt = testutils.read_std_array(os.path.join(pth, "strt.txt"), "float") @@ -3989,6 +3989,17 @@ def test006_2models_different_dis(function_tmpdir, example_data_path): assert gnc_data[0][1] == (0, 0) assert gnc_data[0][2] == (0, 1, 1) + # test remove_model + sim2.remove_model(model_name_2) + sim2.write_simulation() + success, buff = sim2.run_simulation() + assert success + sim3 = MFSimulation.load(sim_ws=sim.sim_path) + assert sim3.get_model(model_name_1) is not None + assert sim3.get_model(model_name_2) is None + assert len(sim3.name_file.models.get_data()) == 1 + assert sim3.name_file.exchanges.get_data() is None + sim.delete_output_files() diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index 7bead499c9..71d231da40 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -2244,14 +2244,14 @@ def 
test_multi_model(function_tmpdir): @requires_exe("mf6") -def test_namefile_creation(tmpdir): +def test_namefile_creation(function_tmpdir): test_ex_name = "test_namefile" # build MODFLOW 6 files sim = MFSimulation( sim_name=test_ex_name, version="mf6", exe_name="mf6", - sim_ws=str(tmpdir), + sim_ws=str(function_tmpdir), ) tdis_rc = [(6.0, 2, 1.0), (6.0, 3, 1.0), (6.0, 3, 1.0), (6.0, 3, 1.0)] @@ -2293,3 +2293,46 @@ def test_namefile_creation(tmpdir): except flopy.mf6.mfbase.FlopyException: ex_happened = True assert ex_happened + + +def test_remove_model(function_tmpdir, example_data_path): + # load a multi-model simulation + sim_ws = str(example_data_path / "mf6" / "test006_2models_mvr") + sim = MFSimulation.load(sim_ws=sim_ws, exe_name="mf6") + + # original simulation should contain models: + # - 'parent', with files named 'model1.ext' + # - 'child', with files named 'model2.ext' + assert len(sim.model_names) == 2 + assert "parent" in sim.model_names + assert "child" in sim.model_names + + # remove the child model + sim.remove_model("child") + + # simulation should now only contain the parent model + assert len(sim.model_names) == 1 + assert "parent" in sim.model_names + + # write simulation input files + sim.set_sim_path(function_tmpdir) + sim.write_simulation() + + # there should be no input files for the child model + files = list(function_tmpdir.glob("*")) + assert not any("model2" in f.name for f in files) + + # there should be no model or solver entry for the child model in the simulation namefile + lines = open(function_tmpdir / "mfsim.nam").readlines() + lines = [l.lower().strip() for l in lines] + assert not any("model2" in l for l in lines) + assert not any("child" in l for l in lines) + + # there should be no exchanges either + exg_index = 0 + for i, l in enumerate(lines): + if "begin exchanges" in l: + exg_index = i + elif exg_index > 0: + assert "end exchanges" in l + break diff --git a/flopy/mf6/mfsimbase.py b/flopy/mf6/mfsimbase.py index 1c96c23b46..264f35fdbd 100644 --- a/flopy/mf6/mfsimbase.py +++ b/flopy/mf6/mfsimbase.py @@ -2183,11 +2183,36 @@ def remove_model(self, model_name): Model name to remove from simulation """ - # Remove model + # remove model del self._models[model_name] - # TODO: Fully implement this - # Update simulation name file + # remove from solution group block + self._remove_from_all_solution_groups(model_name) + + # remove from models block + models_recarray = self.name_file.models.get_data() + if models_recarray is not None: + new_records = [] + for record in models_recarray: + if len(record) <= 2 or record[2] != model_name: + new_records.append(tuple(record)) + self.name_file.models.set_data(new_records) + + # remove from exchanges block + exch_recarray = self.name_file.exchanges.get_data() + if exch_recarray is not None: + new_records = [] + for record in exch_recarray: + model_in_record = False + if len(record) > 2: + for item in list(record)[2:]: + if item == model_name: + model_in_record = True + if not model_in_record: + new_records.append(tuple(record)) + if len(new_records) == 0: + new_records = None + self.name_file.exchanges.set_data(new_records) def is_valid(self): """ From 2fa32da4e96cff689baac333bee76eb3b7655520 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 15 Sep 2023 09:22:24 -0400 Subject: [PATCH 052/101] docs(binder): remove configuration, links, badges (#1947) * https://discourse.jupyter.org/t/mybinder-org-reducing-capacity-unstable/19163 * https://blog.jupyter.org/mybinder-org-reducing-capacity-c93ccfc6413f --- 
.docs/prolog.rst | 2 +- README.md | 2 -- binder/postBuild | 8 -------- scripts/update_version.py | 6 ------ 4 files changed, 1 insertion(+), 17 deletions(-) delete mode 100755 binder/postBuild diff --git a/.docs/prolog.rst b/.docs/prolog.rst index f80f86e4bb..3efa2ef7d0 100644 --- a/.docs/prolog.rst +++ b/.docs/prolog.rst @@ -5,6 +5,6 @@ .. note:: - | This page was generated from `{{ docname[:-6] }}`__. Download as :download:`a Jupyter notebook (.ipynb) <../{{ docname }}>` or :download:`Python script (.py) <../{{ docname[:-6] }}.py>` or launch interactively with Binder :raw-html:`Binder badge.` + | This page was generated from `{{ docname[:-6] }}`__. Download as :download:`a Jupyter notebook (.ipynb) <../{{ docname }}>` or :download:`Python script (.py) <../{{ docname[:-6] }}.py>`. __ https://github.com/modflowpy/flopy/blob/develop/.docs/{{ docname[:-6] }}.py \ No newline at end of file diff --git a/README.md b/README.md index 113a79949e..eace01e9f9 100644 --- a/README.md +++ b/README.md @@ -17,8 +17,6 @@ [![PyPI Status](https://img.shields.io/pypi/status/flopy.png)](https://pypi.python.org/pypi/flopy) [![PyPI Versions](https://img.shields.io/pypi/pyversions/flopy.png)](https://pypi.python.org/pypi/flopy) -[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/modflowpy/flopy.git/develop) - Introduction ----------------------------------------------- diff --git a/binder/postBuild b/binder/postBuild deleted file mode 100755 index 0d53a66c18..0000000000 --- a/binder/postBuild +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash - -# install modflow & related executables to the Python bindir -get-modflow :python - -# required for jupytext -# https://jupytext.readthedocs.io/en/latest/faq.html#can-i-use-jupytext-with-jupyterhub-binder-nteract-colab-saturn-or-azure -jupyter lab build diff --git a/scripts/update_version.py b/scripts/update_version.py index c7ecdd369a..19390073da 100644 --- a/scripts/update_version.py +++ b/scripts/update_version.py @@ -190,12 +190,6 @@ def update_readme_markdown( "(https://coveralls.io/github/modflowpy/" "flopy?branch=develop)" ) - elif "[Binder]" in line: - # [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/modflowpy/flopy.git/develop) - line = ( - "[![Binder](https://mybinder.org/badge_logo.svg)]" - "(https://mybinder.org/v2/gh/modflowpy/flopy.git/develop)" - ) elif "doi.org/10.5066/F7BK19FH" in line: line = get_software_citation(timestamp, version, approved) elif "Disclaimer" in line: From e20a29814afd97650d6ee2b10f0939893be64551 Mon Sep 17 00:00:00 2001 From: langevin-usgs Date: Mon, 18 Sep 2023 11:49:48 -0500 Subject: [PATCH 053/101] feat(gridutil): add function to help create DISV grid (#1952) --- autotest/test_gridutil.py | 44 ++++++++++++++- flopy/utils/gridutil.py | 115 +++++++++++++++++++++++++++++++++++++- 2 files changed, 156 insertions(+), 3 deletions(-) diff --git a/autotest/test_gridutil.py b/autotest/test_gridutil.py index 749996d0fb..c1288ff713 100644 --- a/autotest/test_gridutil.py +++ b/autotest/test_gridutil.py @@ -3,7 +3,12 @@ import numpy as np import pytest -from flopy.utils.gridutil import get_disu_kwargs, get_lni, uniform_flow_field +from flopy.utils.gridutil import ( + get_disu_kwargs, + get_disv_kwargs, + get_lni, + uniform_flow_field, +) @pytest.mark.parametrize( @@ -102,6 +107,43 @@ def test_get_disu_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): # print(kwargs["nja"]) +@pytest.mark.parametrize( + "nlay, nrow, ncol, delr, delc, tp, botm", + [ + ( + 1, + 61, + 61, + np.array(61 * [50.0]), + 
np.array(61 * [50.0]), + -10.0, + -50.0, + ), + ( + 2, + 61, + 61, + np.array(61 * [50.0]), + np.array(61 * [50.0]), + -10.0, + [-30.0, -50.0], + ), + ], +) +def test_get_disv_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): + kwargs = get_disv_kwargs( + nlay=nlay, nrow=nrow, ncol=ncol, delr=delr, delc=delc, tp=tp, botm=botm + ) + + assert kwargs["nlay"] == nlay + assert kwargs["ncpl"] == nrow * ncol + assert kwargs["nvert"] == (nrow + 1) * (ncol + 1) + + # TODO: test other properties + # print(kwargs["vertices"]) + # print(kwargs["cell2d"]) + + @pytest.mark.parametrize( "qx, qy, qz, nlay, nrow, ncol", [ diff --git a/flopy/utils/gridutil.py b/flopy/utils/gridutil.py index d9ca28c190..000af76ce4 100644 --- a/flopy/utils/gridutil.py +++ b/flopy/utils/gridutil.py @@ -6,6 +6,8 @@ import numpy as np +from .cvfdutil import get_disv_gridprops + def get_lni(ncpl, nodes) -> List[Tuple[int, int]]: """ @@ -68,7 +70,8 @@ def get_disu_kwargs( botm, ): """ - Create args needed to construct a DISU package. + Create args needed to construct a DISU package for a regular + MODFLOW grid. Parameters ---------- @@ -85,7 +88,7 @@ def get_disu_kwargs( tp : int or numpy.ndarray Top elevation(s) of cells in the model's top layer botm : numpy.ndarray - Bottom elevation(s) of all cells in the model + Bottom elevation(s) for each layer """ def get_nn(k, i, j): @@ -181,6 +184,114 @@ def get_nn(k, i, j): return kw +def get_disv_kwargs( + nlay, + nrow, + ncol, + delr, + delc, + tp, + botm, + xoff=0.0, + yoff=0.0, +): + """ + Create args needed to construct a DISV package. + + Parameters + ---------- + nlay : int + Number of layers + nrow : int + Number of rows + ncol : int + Number of columns + delr : float or numpy.ndarray + Column spacing along a row with shape (ncol) + delc : float or numpy.ndarray + Row spacing along a column with shape (nrow) + tp : float or numpy.ndarray + Top elevation(s) of cells in the model's top layer with shape (nrow, ncol) + botm : list of floats or numpy.ndarray + Bottom elevation(s) of all cells in the model with shape (nlay, nrow, ncol) + xoff : float + Value to add to all x coordinates. Optional (default = 0.) + yoff : float + Value to add to all y coordinates. Optional (default = 0.) 
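+ + Returns + ------- + dict + Keyword arguments (nlay, ncpl, nvert, vertices, cell2d, top, + and botm) that can be passed to a MODFLOW 6 DISV package.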
+ """ + + # validate input + ncpl = nrow * ncol + + # delr check + if np.isscalar(delr): + delr = delr * np.ones(ncol, dtype=float) + else: + assert delr.shape == (ncol,), "delr must be array with shape (ncol,)" + + # delc check + if np.isscalar(delc): + delc = delc * np.ones(nrow, dtype=float) + else: + assert delc.shape == (nrow,), "delc must be array with shape (nrow,)" + + # tp check + if np.isscalar(tp): + tp = tp * np.ones((nrow, ncol), dtype=float) + else: + assert tp.shape == ( + nrow, + ncol, + ), "tp must be scalar or array with shape (nrow, ncol)" + + # botm check + if np.isscalar(botm): + botm = botm * np.ones((nlay, nrow, ncol), dtype=float) + elif isinstance(botm, List): + assert ( + len(botm) == nlay + ), "if botm provided as a list it must have length nlay" + b = np.empty((nlay, nrow, ncol), dtype=float) + for k in range(nlay): + b[k] = botm[k] + botm = b + else: + assert botm.shape == ( + nlay, + nrow, + ncol, + ), "botm must be array with shape (nlay, nrow, ncol)" + + # build vertices + xv = np.cumsum(delr) + xv = np.array([0] + list(xv)) + ymax = delc.sum() + yv = np.cumsum(delc) + yv = ymax - np.array([0] + list(yv)) + xmg, ymg = np.meshgrid(xv, yv) + verts = np.array(list(zip(xmg.flatten(), ymg.flatten()))) + verts[:, 0] += xoff + verts[:, 1] += yoff + + # build iverts (list of vertices for each cell) + iverts = [] + for i in range(nrow): + for j in range(ncol): + # number vertices in clockwise order + iv0 = j + i * (ncol + 1) # upper left vertex + iv1 = iv0 + 1 # upper right vertex + iv3 = iv0 + ncol + 1 # lower left vertex + iv2 = iv3 + 1 # lower right vertex + iverts.append([iv0, iv1, iv2, iv3]) + kw = get_disv_gridprops(verts, iverts) + + # reshape and add top and bottom + kw["top"] = tp.reshape(ncpl) + kw["botm"] = botm.reshape(nlay, ncpl) + kw["nlay"] = nlay + return kw + + def uniform_flow_field(qx, qy, qz, shape, delr=None, delc=None, delv=None): nlay, nrow, ncol = shape From 9e9d7302cd771ed305c96fbeb406e210c0d9aea7 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Tue, 19 Sep 2023 11:45:29 -0400 Subject: [PATCH 054/101] fix(export_contours/f): support matplotlib 3.8+ (#1951) * ContourSet.get_paths() returns one path per level * path may contain multiple disconnected components * walk contour components to determine connectivity --- autotest/test_export.py | 15 +++- flopy/export/utils.py | 181 ++++++++++++++++++++++++++++++---------- 2 files changed, 153 insertions(+), 43 deletions(-) diff --git a/autotest/test_export.py b/autotest/test_export.py index afe9bdf39a..c7f84661da 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -51,7 +51,6 @@ from flopy.utils.crs import get_authority_crs from flopy.utils.geometry import Polygon - HAS_PYPROJ = has_pkg("pyproj", strict=True) if HAS_PYPROJ: import pyproj @@ -683,6 +682,13 @@ def test_export_contourf(function_tmpdir, example_data_path): len(shapes) >= 65 ), "multipolygons were skipped in contourf routine" + # debugging + # for s in shapes: + # x = [i[0] for i in s.points[:]] + # y = [i[1] for i in s.points[:]] + # plt.plot(x, y) + # plt.show() + @pytest.mark.mf6 @requires_pkg("shapefile", "shapely") @@ -712,6 +718,13 @@ def test_export_contours(function_tmpdir, example_data_path): # expect 65 with standard mpl contours (structured grids), 86 with tricontours assert len(shapes) >= 65 + # debugging + # for s in shapes: + # x = [i[0] for i in s.points[:]] + # y = [i[1] for i in s.points[:]] + # plt.plot(x, y) + # plt.show() + @pytest.mark.mf6 @requires_pkg("shapely") diff --git a/flopy/export/utils.py 
b/flopy/export/utils.py index e8266787a4..0277e55d43 100644 --- a/flopy/export/utils.py +++ b/flopy/export/utils.py @@ -1,8 +1,10 @@ import os +from itertools import repeat from pathlib import Path -from typing import Optional, Union +from typing import Union import numpy as np +from packaging.version import Version from ..datbase import DataInterface, DataListInterface, DataType from ..mbase import BaseModel, ModelInterface @@ -1680,6 +1682,7 @@ def export_contours( filename: Union[str, os.PathLike], contours, fieldname="level", + verbose=False, **kwargs, ): """ @@ -1693,6 +1696,8 @@ def export_contours( (object returned by matplotlib.pyplot.contour) fieldname : str gis attribute table field name + verbose : bool, optional, default False + whether to show verbose output **kwargs : key-word arguments to flopy.export.shapefile_utils.recarray2shp Returns @@ -1700,26 +1705,76 @@ def export_contours( df : dataframe of shapefile contents """ + from importlib.metadata import version + + from matplotlib.path import Path + from ..utils.geometry import LineString from .shapefile_utils import recarray2shp if not isinstance(contours, list): contours = [contours] + # Export a linestring for each contour component. + # Levels may have multiple disconnected components. geoms = [] level = [] + + # ContourSet.collections was deprecated with + # matplotlib 3.8. ContourSet is a collection + # of Paths, where each Path corresponds to a + # contour level, and may contain one or more + # (possibly disconnected) components. Before + # 3.8, iterating over ContourSet.collections + # and enumerating from get_paths() suffices, + # but post-3.8, we have to walk the segments + # to distinguish disconnected components. + mpl_ver = Version(version("matplotlib")) + for ctr in contours: - levels = ctr.levels - for i, c in enumerate(ctr.collections): - paths = c.get_paths() - geoms += [LineString(p.vertices) for p in paths] - level += list(np.ones(len(paths)) * levels[i]) + if mpl_ver < Version("3.8.0"): + levels = ctr.levels + for i, c in enumerate(ctr.collections): + paths = c.get_paths() + geoms += [LineString(p.vertices) for p in paths] + level += list(np.ones(len(paths)) * levels[i]) + else: + paths = ctr.get_paths() + for pi, path in enumerate(paths): + # skip empty paths + if path.vertices.shape[0] == 0: + continue + + # Each Path may contain multiple components + # so we unpack them as separate geometries. 
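+                # A short gloss on the vertex codes used in the
+                # walk below: MOVETO starts a new component,
+                # LINETO extends the current one, and CLOSEPOLY
+                # closes it back to the component's first vertex.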
+ lines = [] + segs = [] + for seg in path.iter_segments(): + pts, code = seg + if code == Path.MOVETO: + if len(segs) > 0: + lines.append(LineString(segs)) + segs = [] + segs.append(pts) + elif code == Path.LINETO: + segs.append(pts) + elif code == Path.CLOSEPOLY: + segs.append(pts) + segs.append(segs[0]) # add closing segment + lines.append(LineString(segs)) + segs = [] + if len(segs) > 0: + lines.append(LineString(segs)) + + level += list(np.ones(len(lines)) * ctr.levels[pi]) + geoms += lines + + if verbose: + print(f"Writing {len(level)} contour lines") - # convert the dictionary to a recarray ra = np.array(level, dtype=[(fieldname, float)]).view(np.recarray) recarray2shp(ra, geoms, filename, **kwargs) - return def export_contourf( @@ -1756,57 +1811,99 @@ def export_contourf( >>> export_contourf('myfilledcontours.shp', cs) """ + from importlib.metadata import version + + from matplotlib.path import Path + from ..utils.geometry import Polygon, is_clockwise from .shapefile_utils import recarray2shp + if not isinstance(contours, list): + contours = [contours] + + # export a polygon for each filled contour geoms = [] level = [] - if not isinstance(contours, list): - contours = [contours] + # ContourSet.collections was deprecated with + # matplotlib 3.8. ContourSet is a collection + # of Paths, where each Path corresponds to a + # contour level, and may contain one or more + # (possibly disconnected) components. Before + # 3.8, iterating over ContourSet.collections + # and enumerating from get_paths() suffices, + # but post-3.8, we have to walk the segments + # to distinguish disconnected components. + mpl_ver = Version(version("matplotlib")) - for c in contours: - levels = c.levels - for idx, col in enumerate(c.collections): - # Loop through all polygons that have the same intensity level - for contour_path in col.get_paths(): - # Create the polygon(s) for this intensity level - poly = None - for ncp, cp in enumerate(contour_path.to_polygons()): - x = cp[:, 0] - y = cp[:, 1] - verts = [(i[0], i[1]) for i in zip(x, y)] - new_shape = Polygon(verts) - - if ncp == 0: - poly = new_shape - else: - # check if this is a multipolygon by checking vertex - # order. - if is_clockwise(verts): - # Clockwise is a hole, set to interiors - if not poly.interiors: - poly.interiors = [new_shape.exterior] - else: - poly.interiors.append(new_shape.exterior) - else: - geoms.append(poly) - level.append(levels[idx]) + for ctr in contours: + if mpl_ver < Version("3.8.0"): + levels = ctr.levels + for idx, col in enumerate(ctr.collections): + for contour_path in col.get_paths(): + # Create the polygon(s) for this intensity level + poly = None + for ncp, cp in enumerate(contour_path.to_polygons()): + x = cp[:, 0] + y = cp[:, 1] + verts = [(i[0], i[1]) for i in zip(x, y)] + new_shape = Polygon(verts) + + if ncp == 0: poly = new_shape + else: + # check if this is a multipolygon by checking vertex + # order. 
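+                            # (This check relies on the usual winding
+                            # convention for filled contours: exterior
+                            # rings run counter-clockwise and holes
+                            # run clockwise.)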
+ if is_clockwise(verts): + # Clockwise is a hole, set to interiors + if not poly.interiors: + poly.interiors = [new_shape.exterior] + else: + poly.interiors.append(new_shape.exterior) + else: + geoms.append(poly) + level.append(levels[idx]) + poly = new_shape - if poly is not None: - # store geometry object - geoms.append(poly) + if poly is not None: + # store geometry object + geoms.append(poly) level.append(levels[idx]) + else: + paths = ctr.get_paths() + for pi, path in enumerate(paths): + # skip empty paths + if path.vertices.shape[0] == 0: + continue + + polys = [] + segs = [] + for seg in path.iter_segments(): + pts, code = seg + if code == Path.MOVETO: + if len(segs) > 0: + polys.append(Polygon(segs)) + segs = [] + segs.append(pts) + elif code == Path.LINETO: + segs.append(pts) + elif code == Path.CLOSEPOLY: + segs.append(pts) + segs.append(segs[0]) # add closing segment + polys.append(Polygon(segs)) + segs = [] + if len(segs) > 0: + polys.append(Polygon(segs)) + + geoms.extend(polys) + level.extend(repeat(ctr.levels[pi], len(polys))) if verbose: print(f"Writing {len(level)} polygons") - # Create recarray ra = np.array(level, dtype=[(fieldname, float)]).view(np.recarray) recarray2shp(ra, geoms, filename, **kwargs) - return def export_array_contours( From 06eefd70bc23a43dd057f98bc879d5f2762e6078 Mon Sep 17 00:00:00 2001 From: Chris Nicol <37314969+cnicol-gwlogic@users.noreply.github.com> Date: Fri, 22 Sep 2023 22:50:30 +1000 Subject: [PATCH 055/101] Fix usgbcf (#1959) * fix(usg bcf) ksat util3d call --> util2d call Signed-off-by: Chris Nicol * doc(modflow/sfr2) add sfr2 channel_geometry_data to docstring Signed-off-by: Chris Nicol --------- Signed-off-by: Chris Nicol --- flopy/mfusg/mfusgbcf.py | 2 +- flopy/modflow/mfsfr2.py | 10 ++++++++++ 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/flopy/mfusg/mfusgbcf.py b/flopy/mfusg/mfusgbcf.py index 231e80d1e6..6bda77d7cb 100644 --- a/flopy/mfusg/mfusgbcf.py +++ b/flopy/mfusg/mfusgbcf.py @@ -219,7 +219,7 @@ def __init__( locat=self.unit_number[0], ) if not structured: - self.ksat = Util3d( + self.ksat = Util2d( model, (njag,), np.float32, diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index 1efe601125..c5035b9c48 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -182,6 +182,16 @@ class ModflowSfr2(Package): Numpy record array of length equal to nss, with columns for each variable entered in items 6a, 6b and 6c (see SFR package input instructions). Segment numbers are one-based. + channel_geometry_data : dict of dicts containing lists + Optional. Outer dictionary keyed by stress period (0-based); inner + dictionaries keyed by segment number (1-based), for which 8-point channel + cross section geometries are desired. Inner dict values are lists of shape + (2,8) - with the first dimension referring to two lists: one of 8 XCPT + values, and the other of 8 ZCPT values. + Example structure: {kper: {segment: [[xcpt1...xcpt8],[zcpt1...zcpt8]]}}. + Note that for these to be applied, the user must also specify an icalc + value of 2 for each corresponding segment in segment_data for the + relevant stress periods. dataset_5 : dict of lists Optional; will be built automatically from segment_data unless specified. Dict of lists, with key for each stress period. 
Each list
From 1aa2c459bff9644531bea61267b7164766b22d10 Mon Sep 17 00:00:00 2001
From: wpbonelli
Date: Fri, 22 Sep 2023 12:26:09 -0400
Subject: [PATCH 056/101] test: split plot tests into more narrowly-scoped
 files (#1949)

---
 autotest/test_model_dot_plot.py       |  76 +++
 autotest/test_plot.py                 | 735 --------------------------
 autotest/test_plot_cross_section.py   | 183 +++++++
 autotest/test_plot_map_view.py        | 200 +++++++
 autotest/test_plot_particle_tracks.py | 174 ++++++
 autotest/test_plot_quasi3d.py         | 154 ++++++
 autotest/test_shapefile_utils.py      |   4 +-
 7 files changed, 789 insertions(+), 737 deletions(-)
 create mode 100644 autotest/test_model_dot_plot.py
 delete mode 100644 autotest/test_plot.py
 create mode 100644 autotest/test_plot_cross_section.py
 create mode 100644 autotest/test_plot_map_view.py
 create mode 100644 autotest/test_plot_particle_tracks.py
 create mode 100644 autotest/test_plot_quasi3d.py

diff --git a/autotest/test_model_dot_plot.py b/autotest/test_model_dot_plot.py
new file mode 100644
index 0000000000..da8cac18d4
--- /dev/null
+++ b/autotest/test_model_dot_plot.py
@@ -0,0 +1,76 @@
+from os import listdir
+from os.path import join
+
+import pytest
+from flaky import flaky
+from matplotlib import rcParams
+
+from flopy.mf6 import MFSimulation
+from flopy.modflow import Modflow
+
+
+@pytest.mark.mf6
+def test_vertex_model_dot_plot(example_data_path):
+    rcParams["figure.max_open_warning"] = 36
+
+    # load up the vertex example problem
+    sim = MFSimulation.load(
+        sim_ws=example_data_path / "mf6" / "test003_gwftri_disv"
+    )
+    disv_ml = sim.get_model("gwf_1")
+    ax = disv_ml.plot()
+    assert isinstance(ax, list)
+    assert len(ax) == 36
+
+
+# occasional _tkinter.TclError: Can't find a usable tk.tcl (or init.tcl)
+# similar: https://github.com/microsoft/azure-pipelines-tasks/issues/16426
+@flaky
+def test_model_dot_plot(function_tmpdir, example_data_path):
+    loadpth = example_data_path / "mf2005_test"
+    ml = Modflow.load("ibs2k.nam", "mf2k", model_ws=loadpth, check=False)
+    ax = ml.plot()
+    assert isinstance(ax, list), "ml.plot() ax is not a list"
+    assert len(ax) == 18, f"number of axes ({len(ax)}) is not equal to 18"
+
+
+def test_dataset_dot_plot(function_tmpdir, example_data_path):
+    loadpth = example_data_path / "mf2005_test"
+    ml = Modflow.load("ibs2k.nam", "mf2k", model_ws=loadpth, check=False)
+
+    # plot specific dataset
+    ax = ml.bcf6.hy.plot()
+    assert isinstance(ax, list), "ml.bcf6.hy.plot() ax is not a list"
+    assert len(ax) == 2, f"number of hy axes ({len(ax)}) is not equal to 2"
+
+
+def test_dataset_dot_plot_nlay_ne_plottable(
+    function_tmpdir, example_data_path
+):
+    import matplotlib.pyplot as plt
+
+    loadpth = example_data_path / "mf2005_test"
+    ml = Modflow.load("ibs2k.nam", "mf2k", model_ws=loadpth, check=False)
+    # special case where nlay != plottable
+    ax = ml.bcf6.vcont.plot()
+    assert isinstance(
+        ax, plt.Axes
+    ), "ml.bcf6.vcont.plot() ax is not of type plt.Axes"
+
+
+def test_model_dot_plot_export(function_tmpdir, example_data_path):
+    loadpth = example_data_path / "mf2005_test"
+    ml = Modflow.load("ibs2k.nam", "mf2k", model_ws=loadpth, check=False)
+
+    fh = join(function_tmpdir, "ibs2k")
+    ml.plot(mflay=0, filename_base=fh, file_extension="png")
+    files = [f for f in listdir(function_tmpdir) if f.endswith(".png")]
+    if len(files) < 10:
+        raise AssertionError(
+            "ml.plot did not properly export all supported data types"
+        )
+
+    for f in files:
+        t = f.split("_")
+        if len(t) < 3:
+            raise AssertionError("Plot filenames not written correctly")
diff --git
a/autotest/test_plot.py b/autotest/test_plot.py deleted file mode 100644 index 58e6726727..0000000000 --- a/autotest/test_plot.py +++ /dev/null @@ -1,735 +0,0 @@ -import os - -import numpy as np -import pandas as pd -import pytest -from flaky import flaky -from matplotlib import pyplot as plt -from matplotlib import rcParams -from matplotlib.collections import ( - LineCollection, - PatchCollection, - PathCollection, - QuadMesh, -) -from modflow_devtools.markers import requires_exe, requires_pkg - -import flopy -from flopy.discretization import StructuredGrid -from flopy.mf6 import MFSimulation -from flopy.modflow import ( - Modflow, - ModflowBas, - ModflowDis, - ModflowLpf, - ModflowOc, - ModflowPcg, - ModflowWel, -) -from flopy.modpath import Modpath6, Modpath6Bas -from flopy.plot import PlotCrossSection, PlotMapView -from flopy.utils import CellBudgetFile, EndpointFile, HeadFile, PathlineFile - - -@requires_pkg("shapely") -def test_map_view(): - m = flopy.modflow.Modflow(rotation=20.0) - dis = flopy.modflow.ModflowDis( - m, nlay=1, nrow=40, ncol=20, delr=250.0, delc=250.0, top=10, botm=0 - ) - # transformation assigned by arguments - xll, yll, rotation = 500000.0, 2934000.0, 45.0 - - def check_vertices(): - xllp, yllp = pc._paths[0].vertices[0] - assert np.abs(xllp - xll) < 1e-6 - assert np.abs(yllp - yll) < 1e-6 - - m.modelgrid.set_coord_info(xoff=xll, yoff=yll, angrot=rotation) - modelmap = flopy.plot.PlotMapView(model=m) - pc = modelmap.plot_grid() - check_vertices() - - modelmap = flopy.plot.PlotMapView(modelgrid=m.modelgrid) - pc = modelmap.plot_grid() - check_vertices() - - mf = flopy.modflow.Modflow() - - # Model domain and grid definition - dis = flopy.modflow.ModflowDis( - mf, - nlay=1, - nrow=10, - ncol=20, - delr=1.0, - delc=1.0, - ) - xul, yul = 100.0, 210.0 - mg = mf.modelgrid - mf.modelgrid.set_coord_info( - xoff=mg._xul_to_xll(xul, 0.0), yoff=mg._yul_to_yll(yul, 0.0) - ) - verts = [[101.0, 201.0], [119.0, 209.0]] - modelxsect = flopy.plot.PlotCrossSection(model=mf, line={"line": verts}) - patchcollection = modelxsect.plot_grid() - - -@pytest.mark.mf6 -@pytest.mark.xfail(reason="sometimes get wrong collection type") -def test_map_view_bc_gwfs_disv(example_data_path): - mpath = example_data_path / "mf6" / "test003_gwfs_disv" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("gwf_1") - ml6.modelgrid.set_coord_info(angrot=-14) - mapview = flopy.plot.PlotMapView(model=ml6) - mapview.plot_bc("CHD") - ax = mapview.ax - - if len(ax.collections) == 0: - raise AssertionError("Boundary condition was not drawn") - - for col in ax.collections: - assert isinstance( - col, (QuadMesh, PathCollection) - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail(reason="sometimes get wrong collection type") -def test_map_view_bc_lake2tr(example_data_path): - mpath = example_data_path / "mf6" / "test045_lake2tr" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("lakeex2a") - mapview = flopy.plot.PlotMapView(model=ml6) - mapview.plot_bc("LAK") - mapview.plot_bc("SFR") - - ax = mapview.ax - if len(ax.collections) == 0: - raise AssertionError("Boundary condition was not drawn") - - for col in ax.collections: - assert isinstance( - col, (QuadMesh, PathCollection) - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail(reason="sometimes get wrong collection type") -def test_map_view_bc_2models_mvr(example_data_path): - mpath = example_data_path / "mf6" / "test006_2models_mvr" - sim = 
MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("parent") - ml6c = sim.get_model("child") - ml6c.modelgrid.set_coord_info(xoff=700, yoff=0, angrot=0) - - mapview = flopy.plot.PlotMapView(model=ml6) - mapview.plot_bc("MAW") - ax = mapview.ax - - assert len(ax.collections) > 0, "Boundary condition was not drawn" - - mapview2 = flopy.plot.PlotMapView(model=ml6c, ax=mapview.ax) - mapview2.plot_bc("MAW") - ax = mapview2.ax - - assert len(ax.collections) > 0, "Boundary condition was not drawn" - - for col in ax.collections: - assert isinstance( - col, (QuadMesh, PathCollection) - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail(reason="sometimes get wrong collection type") -def test_map_view_bc_UZF_3lay(example_data_path): - mpath = example_data_path / "mf6" / "test001e_UZF_3lay" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("gwf_1") - - mapview = flopy.plot.PlotMapView(model=ml6) - mapview.plot_bc("UZF") - ax = mapview.ax - - if len(ax.collections) == 0: - raise AssertionError("Boundary condition was not drawn") - - for col in ax.collections: - assert isinstance( - col, (QuadMesh, PathCollection) - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail( - reason="sometimes get LineCollections instead of PatchCollections" -) -def test_cross_section_bc_gwfs_disv(example_data_path): - mpath = example_data_path / "mf6" / "test003_gwfs_disv" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("gwf_1") - xc = flopy.plot.PlotCrossSection(ml6, line={"line": ([0, 5.5], [10, 5.5])}) - xc.plot_bc("CHD") - ax = xc.ax - - assert len(ax.collections) != 0, "Boundary condition was not drawn" - - for col in ax.collections: - assert isinstance( - col, PatchCollection - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail( - reason="sometimes get LineCollections instead of PatchCollections" -) -def test_cross_section_bc_lake2tr(example_data_path): - mpath = example_data_path / "mf6" / "test045_lake2tr" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("lakeex2a") - xc = flopy.plot.PlotCrossSection(ml6, line={"row": 10}) - xc.plot_bc("LAK") - xc.plot_bc("SFR") - - ax = xc.ax - assert len(ax.collections) != 0, "Boundary condition was not drawn" - - for col in ax.collections: - assert isinstance( - col, PatchCollection - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail( - reason="sometimes get LineCollections instead of PatchCollections" -) -def test_cross_section_bc_2models_mvr(example_data_path): - mpath = example_data_path / "mf6" / "test006_2models_mvr" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("parent") - xc = flopy.plot.PlotCrossSection(ml6, line={"column": 1}) - xc.plot_bc("MAW") - - ax = xc.ax - assert len(ax.collections) > 0, "Boundary condition was not drawn" - - for col in ax.collections: - assert isinstance( - col, PatchCollection - ), f"Unexpected collection type: {type(col)}" - - -@pytest.mark.mf6 -@pytest.mark.xfail( - reason="sometimes get LineCollections instead of PatchCollections" -) -def test_cross_section_bc_UZF_3lay(example_data_path): - mpath = example_data_path / "mf6" / "test001e_UZF_3lay" - sim = MFSimulation.load(sim_ws=mpath) - ml6 = sim.get_model("gwf_1") - - xc = flopy.plot.PlotCrossSection(ml6, line={"row": 0}) - xc.plot_bc("UZF") - - ax = xc.ax - assert len(ax.collections) != 0, "Boundary condition was not drawn" - - for col in ax.collections: - assert isinstance( - col, 
PatchCollection - ), f"Unexpected collection type: {type(col)}" - - -def test_map_view_contour(function_tmpdir): - arr = np.random.rand(10, 10) * 100 - arr[-1, :] = np.nan - delc = np.array([10] * 10, dtype=float) - delr = np.array([8] * 10, dtype=float) - top = np.ones((10, 10), dtype=float) - botm = np.ones((3, 10, 10), dtype=float) - botm[0] = 0.75 - botm[1] = 0.5 - botm[2] = 0.25 - idomain = np.ones((3, 10, 10)) - idomain[0, 0, :] = 0 - vmin = np.nanmin(arr) - vmax = np.nanmax(arr) - levels = np.linspace(vmin, vmax, 7) - - grid = StructuredGrid( - delc=delc, - delr=delr, - top=top, - botm=botm, - idomain=idomain, - lenuni=1, - nlay=3, - nrow=10, - ncol=10, - ) - - pmv = PlotMapView(modelgrid=grid, layer=0) - contours = pmv.contour_array(a=arr) - plt.savefig(function_tmpdir / "map_view_contour.png") - - # if we ever revert from standard contours to tricontours, restore this nan check - # for ix, lev in enumerate(contours.levels): - # if not np.allclose(lev, levels[ix]): - # raise AssertionError("TriContour NaN catch Failed") - - -@pytest.mark.mf6 -def test_vertex_model_dot_plot(example_data_path): - rcParams["figure.max_open_warning"] = 36 - - # load up the vertex example problem - sim = MFSimulation.load( - sim_ws=example_data_path / "mf6" / "test003_gwftri_disv" - ) - disv_ml = sim.get_model("gwf_1") - ax = disv_ml.plot() - assert isinstance(ax, list) - assert len(ax) == 36 - - -# occasional _tkinter.TclError: Can't find a usable tk.tcl (or init.tcl) -# similar: https://github.com/microsoft/azure-pipelines-tasks/issues/16426 -@flaky -def test_model_dot_plot(function_tmpdir, example_data_path): - loadpth = example_data_path / "mf2005_test" - ml = flopy.modflow.Modflow.load( - "ibs2k.nam", "mf2k", model_ws=loadpth, check=False - ) - ax = ml.plot() - assert isinstance(ax, list), "ml.plot() ax is is not a list" - assert len(ax) == 18, f"number of axes ({len(ax)}) is not equal to 18" - - -def test_dataset_dot_plot(function_tmpdir, example_data_path): - loadpth = example_data_path / "mf2005_test" - ml = flopy.modflow.Modflow.load( - "ibs2k.nam", "mf2k", model_ws=loadpth, check=False - ) - - # plot specific dataset - ax = ml.bcf6.hy.plot() - assert isinstance(ax, list), "ml.bcf6.hy.plot() ax is is not a list" - assert len(ax) == 2, f"number of hy axes ({len(ax)}) is not equal to 2" - - -def test_dataset_dot_plot_nlay_ne_plottable( - function_tmpdir, example_data_path -): - import matplotlib.pyplot as plt - - loadpth = example_data_path / "mf2005_test" - ml = flopy.modflow.Modflow.load( - "ibs2k.nam", "mf2k", model_ws=loadpth, check=False - ) - # special case where nlay != plottable - ax = ml.bcf6.vcont.plot() - assert isinstance( - ax, plt.Axes - ), "ml.bcf6.vcont.plot() ax is is not of type plt.Axes" - - -def test_model_dot_plot_export(function_tmpdir, example_data_path): - loadpth = example_data_path / "mf2005_test" - ml = flopy.modflow.Modflow.load( - "ibs2k.nam", "mf2k", model_ws=loadpth, check=False - ) - - fh = os.path.join(function_tmpdir, "ibs2k") - ml.plot(mflay=0, filename_base=fh, file_extension="png") - files = [f for f in os.listdir(function_tmpdir) if f.endswith(".png")] - if len(files) < 10: - raise AssertionError( - "ml.plot did not properly export all supported data types" - ) - - for f in files: - t = f.split("_") - if len(t) < 3: - raise AssertionError("Plot filenames not written correctly") - - -@pytest.fixture -def modpath_model(function_tmpdir, example_data_path): - # test with multi-layer example - load_ws = example_data_path / "mp6" - - ml = 
Modflow.load("EXAMPLE.nam", model_ws=load_ws, exe_name="mf2005") - ml.change_model_ws(function_tmpdir) - ml.write_input() - ml.run_model() - - mp = Modpath6( - modelname="ex6", - exe_name="mp6", - modflowmodel=ml, - model_ws=function_tmpdir, - ) - - mpb = Modpath6Bas( - mp, hdry=ml.lpf.hdry, laytyp=ml.lpf.laytyp, ibound=1, prsity=0.1 - ) - - sim = mp.create_mpsim( - trackdir="forward", - simtype="pathline", - packages="RCH", - start_time=(2, 0, 1.0), - ) - return ml, mp, sim - - -@requires_exe("mf2005", "mp6") -def test_plot_map_view_mp6_plot_pathline(modpath_model): - ml, mp, sim = modpath_model - mp.write_input() - mp.run_model(silent=False) - - pthobj = PathlineFile(os.path.join(mp.model_ws, "ex6.mppth")) - well_pathlines = pthobj.get_destination_pathline_data( - dest_cells=[(4, 12, 12)] - ) - - def test_plot(pl): - mx = PlotMapView(model=ml) - mx.plot_grid() - mx.plot_bc("WEL", kper=2, color="blue") - pth = mx.plot_pathline(pl, colors="red") - # plt.show() - assert isinstance(pth, LineCollection) - assert len(pth._paths) == 114 - - # support pathlines as list of recarrays - test_plot(well_pathlines) - - # support pathlines as list of dataframes - test_plot([pd.DataFrame(pl) for pl in well_pathlines]) - - # support pathlines as single recarray - test_plot(np.concatenate(well_pathlines)) - - # support pathlines as single dataframe - test_plot(pd.DataFrame(np.concatenate(well_pathlines))) - - -@requires_exe("mf2005", "mp6") -def test_plot_cross_section_mp6_plot_pathline(modpath_model): - ml, mp, sim = modpath_model - mp.write_input() - mp.run_model(silent=False) - - pthobj = PathlineFile(os.path.join(mp.model_ws, "ex6.mppth")) - well_pathlines = pthobj.get_destination_pathline_data( - dest_cells=[(4, 12, 12)] - ) - - def test_plot(pl): - mx = PlotCrossSection(model=ml, line={"row": 4}) - mx.plot_bc("WEL", kper=2, color="blue") - pth = mx.plot_pathline(pl, method="cell", colors="red") - assert isinstance(pth, LineCollection) - assert len(pth._paths) == 6 - - # support pathlines as list of recarrays - test_plot(well_pathlines) - - # support pathlines as list of dataframes - test_plot([pd.DataFrame(pl) for pl in well_pathlines]) - - # support pathlines as single recarray - test_plot(np.concatenate(well_pathlines)) - - # support pathlines as single dataframe - test_plot(pd.DataFrame(np.concatenate(well_pathlines))) - - -@requires_exe("mf2005", "mp6") -def test_plot_map_view_mp6_endpoint(modpath_model): - ml, mp, sim = modpath_model - mp.write_input() - mp.run_model(silent=False) - - pthobj = EndpointFile(os.path.join(mp.model_ws, "ex6.mpend")) - endpts = pthobj.get_alldata() - - # support endpoints as recarray - assert isinstance(endpts, np.recarray) - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint(endpts, direction="ending") - # plt.show() - assert isinstance(ep, PathCollection) - - # support endpoints as dataframe - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint(pd.DataFrame(endpts), direction="ending") - # plt.show() - assert isinstance(ep, PathCollection) - - # test various possibilities for endpoint color configuration. 
- # first, color kwarg as scalar - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint(endpts, direction="ending", color="red") - # plt.show() - assert isinstance(ep, PathCollection) - - # c kwarg as array - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint( - endpts, - direction="ending", - c=np.random.rand(625) * -1000, - cmap="viridis", - ) - # plt.show() - assert isinstance(ep, PathCollection) - - # colorbar: color by time to termination - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint( - endpts, direction="ending", shrink=0.5, colorbar=True - ) - # plt.show() - assert isinstance(ep, PathCollection) - - # if both color and c are provided, c takes precedence - mv = PlotMapView(model=ml) - mv.plot_bc("WEL", kper=2, color="blue") - ep = mv.plot_endpoint( - endpts, - direction="ending", - color="red", - c=np.random.rand(625) * -1000, - cmap="viridis", - ) - # plt.show() - assert isinstance(ep, PathCollection) - - -@pytest.fixture -def quasi3d_model(function_tmpdir): - mf = Modflow("model_mf", model_ws=function_tmpdir, exe_name="mf2005") - - # Model domain and grid definition - Lx = 1000.0 - Ly = 1000.0 - ztop = 0.0 - zbot = -30.0 - nlay = 3 - nrow = 10 - ncol = 10 - delr = Lx / ncol - delc = Ly / nrow - laycbd = [0] * (nlay) - laycbd[0] = 1 - botm = np.linspace(ztop, zbot, nlay + np.sum(laycbd) + 1)[1:] - - # Create the discretization object - ModflowDis( - mf, - nlay, - nrow, - ncol, - delr=delr, - delc=delc, - top=ztop, - botm=botm, - laycbd=laycbd, - ) - - # Variables for the BAS package - ibound = np.ones((nlay, nrow, ncol), dtype=np.int32) - ibound[:, :, 0] = -1 - ibound[:, :, -1] = -1 - strt = np.ones((nlay, nrow, ncol), dtype=np.float32) - strt[:, :, 0] = 10.0 - strt[:, :, -1] = 0.0 - ModflowBas(mf, ibound=ibound, strt=strt) - - # Add LPF package to the MODFLOW model - ModflowLpf(mf, hk=10.0, vka=10.0, ipakcb=53, vkcb=10) - - # add a well - row = int((nrow - 1) / 2) - col = int((ncol - 1) / 2) - spd = {0: [[1, row, col, -1000]]} - ModflowWel(mf, stress_period_data=spd) - - # Add OC package to the MODFLOW model - spd = {(0, 0): ["save head", "save budget"]} - ModflowOc(mf, stress_period_data=spd, compact=True) - - # Add PCG package to the MODFLOW model - ModflowPcg(mf) - - # Write the MODFLOW model input files - mf.write_input() - - # Run the MODFLOW model - success, buff = mf.run_model() - - assert success, "test_plotting_with_quasi3d_layers() failed" - - return mf - - -@requires_exe("mf2005") -def test_map_plot_with_quasi3d_layers(quasi3d_model): - # read output - hf = HeadFile( - os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.hds") - ) - head = hf.get_data(totim=1.0) - cbb = CellBudgetFile( - os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.cbc") - ) - frf = cbb.get_data(text="FLOW RIGHT FACE", totim=1.0)[0] - fff = cbb.get_data(text="FLOW FRONT FACE", totim=1.0)[0] - flf = cbb.get_data(text="FLOW LOWER FACE", totim=1.0)[0] - - # plot a map - plt.figure() - mv = PlotMapView(model=quasi3d_model, layer=1) - mv.plot_grid() - mv.plot_array(head) - mv.contour_array(head) - mv.plot_ibound() - mv.plot_bc("wel") - mv.plot_vector(frf, fff) - plt.savefig(os.path.join(quasi3d_model.model_ws, "plt01.png")) - - -@requires_exe("mf2005") -def test_cross_section_with_quasi3d_layers(quasi3d_model): - # read output - hf = HeadFile( - os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.hds") - ) - head = hf.get_data(totim=1.0) - cbb = 
CellBudgetFile( - os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.cbc") - ) - frf = cbb.get_data(text="FLOW RIGHT FACE", totim=1.0)[0] - fff = cbb.get_data(text="FLOW FRONT FACE", totim=1.0)[0] - flf = cbb.get_data(text="FLOW LOWER FACE", totim=1.0)[0] - - # plot a cross-section - plt.figure() - cs = PlotCrossSection( - model=quasi3d_model, - line={"row": int((quasi3d_model.modelgrid.nrow - 1) / 2)}, - ) - cs.plot_grid() - cs.plot_array(head) - cs.contour_array(head) - cs.plot_ibound() - cs.plot_bc("wel") - cs.plot_vector(frf, fff, flf, head=head) - plt.savefig(os.path.join(quasi3d_model.model_ws, "plt02.png")) - plt.close() - - -def structured_square_grid(side: int = 10, thick: int = 10): - """ - Creates a basic 1-layer structured grid with the given thickness and number of cells per side - Parameters - ---------- - side : The number of cells per side - thick : The thickness of the grid's single layer - Returns - ------- - A single-layer StructuredGrid of the given size and thickness - """ - - from flopy.discretization.structuredgrid import StructuredGrid - - delr = np.ones(side) - delc = np.ones(side) - top = np.ones((side, side)) * thick - botm = np.ones((side, side)) * (top - thick).reshape(1, side, side) - return StructuredGrid(delr=delr, delc=delc, top=top, botm=botm) - - -@requires_pkg("shapely") -@pytest.mark.parametrize( - "line", - [(), [], (()), [[]], (0, 0), [0, 0], [[0, 0]]], -) -def test_cross_section_invalid_lines_raise_error(line): - grid = structured_square_grid(side=10) - with pytest.raises(ValueError): - flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) - - -@requires_pkg("shapely") -@pytest.mark.parametrize( - "line", - [ - # diagonal - [(0, 0), (10, 10)], - ([0, 0], [10, 10]), - # horizontal - ([0, 5.5], [10, 5.5]), - [(0, 5.5), (10, 5.5)], - # vertical - [(5.5, 0), (5.5, 10)], - ([5.5, 0], [5.5, 10]), - # multiple segments - [(0, 0), (4, 6), (10, 10)], - ([0, 0], [4, 6], [10, 10]), - ], -) -def test_cross_section_valid_line_representations(line): - from shapely.geometry import LineString as SLS - - from flopy.utils.geometry import LineString as FLS - - grid = structured_square_grid(side=10) - - fls = FLS(line) - sls = SLS(line) - - # use raw, flopy.utils.geometry and shapely.geometry representations - lxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) - fxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": fls}) - sxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": sls}) - - # make sure parsed points are identical for all line representations - assert np.allclose(lxc.pts, fxc.pts) and np.allclose(lxc.pts, sxc.pts) - assert ( - set(lxc.xypts.keys()) == set(fxc.xypts.keys()) == set(sxc.xypts.keys()) - ) - for k in lxc.xypts.keys(): - assert np.allclose(lxc.xypts[k], fxc.xypts[k]) and np.allclose( - lxc.xypts[k], sxc.xypts[k] - ) - - -@pytest.mark.parametrize( - "line", - [ - 0, - [0], - [0, 0], - (0, 0), - [(0, 0)], - ([0, 0]), - ], -) -@requires_pkg("shapely", "geojson") -def test_cross_section_invalid_line_representations_fail(line): - grid = structured_square_grid(side=10) - with pytest.raises(ValueError): - flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) diff --git a/autotest/test_plot_cross_section.py b/autotest/test_plot_cross_section.py new file mode 100644 index 0000000000..2a4605f945 --- /dev/null +++ b/autotest/test_plot_cross_section.py @@ -0,0 +1,183 @@ +import numpy as np +import pytest +from matplotlib.collections import PatchCollection +from modflow_devtools.markers 
import requires_pkg + +import flopy +from flopy.mf6 import MFSimulation + + +@pytest.mark.mf6 +@pytest.mark.xfail( + reason="sometimes get LineCollections instead of PatchCollections" +) +def test_cross_section_bc_gwfs_disv(example_data_path): + mpath = example_data_path / "mf6" / "test003_gwfs_disv" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("gwf_1") + xc = flopy.plot.PlotCrossSection(ml6, line={"line": ([0, 5.5], [10, 5.5])}) + xc.plot_bc("CHD") + ax = xc.ax + + assert len(ax.collections) != 0, "Boundary condition was not drawn" + + for col in ax.collections: + assert isinstance( + col, PatchCollection + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail( + reason="sometimes get LineCollections instead of PatchCollections" +) +def test_cross_section_bc_lake2tr(example_data_path): + mpath = example_data_path / "mf6" / "test045_lake2tr" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("lakeex2a") + xc = flopy.plot.PlotCrossSection(ml6, line={"row": 10}) + xc.plot_bc("LAK") + xc.plot_bc("SFR") + + ax = xc.ax + assert len(ax.collections) != 0, "Boundary condition was not drawn" + + for col in ax.collections: + assert isinstance( + col, PatchCollection + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail( + reason="sometimes get LineCollections instead of PatchCollections" +) +def test_cross_section_bc_2models_mvr(example_data_path): + mpath = example_data_path / "mf6" / "test006_2models_mvr" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("parent") + xc = flopy.plot.PlotCrossSection(ml6, line={"column": 1}) + xc.plot_bc("MAW") + + ax = xc.ax + assert len(ax.collections) > 0, "Boundary condition was not drawn" + + for col in ax.collections: + assert isinstance( + col, PatchCollection + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail( + reason="sometimes get LineCollections instead of PatchCollections" +) +def test_cross_section_bc_UZF_3lay(example_data_path): + mpath = example_data_path / "mf6" / "test001e_UZF_3lay" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("gwf_1") + + xc = flopy.plot.PlotCrossSection(ml6, line={"row": 0}) + xc.plot_bc("UZF") + + ax = xc.ax + assert len(ax.collections) != 0, "Boundary condition was not drawn" + + for col in ax.collections: + assert isinstance( + col, PatchCollection + ), f"Unexpected collection type: {type(col)}" + + +def structured_square_grid(side: int = 10, thick: int = 10): + """ + Creates a basic 1-layer structured grid with the given thickness and number of cells per side + Parameters + ---------- + side : The number of cells per side + thick : The thickness of the grid's single layer + Returns + ------- + A single-layer StructuredGrid of the given size and thickness + """ + + from flopy.discretization.structuredgrid import StructuredGrid + + delr = np.ones(side) + delc = np.ones(side) + top = np.ones((side, side)) * thick + botm = np.ones((side, side)) * (top - thick).reshape(1, side, side) + return StructuredGrid(delr=delr, delc=delc, top=top, botm=botm) + + +@requires_pkg("shapely") +@pytest.mark.parametrize( + "line", + [(), [], (()), [[]], (0, 0), [0, 0], [[0, 0]]], +) +def test_cross_section_invalid_lines_raise_error(line): + grid = structured_square_grid(side=10) + with pytest.raises(ValueError): + flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) + + +@requires_pkg("shapely") +@pytest.mark.parametrize( + "line", + [ + # diagonal + [(0, 0), (10, 10)], + 
([0, 0], [10, 10]), + # horizontal + ([0, 5.5], [10, 5.5]), + [(0, 5.5), (10, 5.5)], + # vertical + [(5.5, 0), (5.5, 10)], + ([5.5, 0], [5.5, 10]), + # multiple segments + [(0, 0), (4, 6), (10, 10)], + ([0, 0], [4, 6], [10, 10]), + ], +) +def test_cross_section_valid_line_representations(line): + from shapely.geometry import LineString as SLS + + from flopy.utils.geometry import LineString as FLS + + grid = structured_square_grid(side=10) + + fls = FLS(line) + sls = SLS(line) + + # use raw, flopy.utils.geometry and shapely.geometry representations + lxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) + fxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": fls}) + sxc = flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": sls}) + + # make sure parsed points are identical for all line representations + assert np.allclose(lxc.pts, fxc.pts) and np.allclose(lxc.pts, sxc.pts) + assert ( + set(lxc.xypts.keys()) == set(fxc.xypts.keys()) == set(sxc.xypts.keys()) + ) + for k in lxc.xypts.keys(): + assert np.allclose(lxc.xypts[k], fxc.xypts[k]) and np.allclose( + lxc.xypts[k], sxc.xypts[k] + ) + + +@pytest.mark.parametrize( + "line", + [ + 0, + [0], + [0, 0], + (0, 0), + [(0, 0)], + ([0, 0]), + ], +) +@requires_pkg("shapely", "geojson") +def test_cross_section_invalid_line_representations_fail(line): + grid = structured_square_grid(side=10) + with pytest.raises(ValueError): + flopy.plot.PlotCrossSection(modelgrid=grid, line={"line": line}) diff --git a/autotest/test_plot_map_view.py b/autotest/test_plot_map_view.py new file mode 100644 index 0000000000..7c0aa8470b --- /dev/null +++ b/autotest/test_plot_map_view.py @@ -0,0 +1,200 @@ +import os + +import numpy as np +import pandas as pd +import pytest +from flaky import flaky +from matplotlib import pyplot as plt +from matplotlib import rcParams +from matplotlib.collections import ( + LineCollection, + PatchCollection, + PathCollection, + QuadMesh, +) +from modflow_devtools.markers import requires_exe, requires_pkg + +import flopy +from flopy.discretization import StructuredGrid +from flopy.mf6 import MFSimulation +from flopy.modflow import ( + Modflow, + ModflowBas, + ModflowDis, + ModflowLpf, + ModflowOc, + ModflowPcg, + ModflowWel, +) +from flopy.modpath import Modpath6, Modpath6Bas +from flopy.plot import PlotCrossSection, PlotMapView +from flopy.utils import CellBudgetFile, EndpointFile, HeadFile, PathlineFile + + +@requires_pkg("shapely") +def test_map_view(): + m = flopy.modflow.Modflow(rotation=20.0) + dis = flopy.modflow.ModflowDis( + m, nlay=1, nrow=40, ncol=20, delr=250.0, delc=250.0, top=10, botm=0 + ) + # transformation assigned by arguments + xll, yll, rotation = 500000.0, 2934000.0, 45.0 + + def check_vertices(): + xllp, yllp = pc._paths[0].vertices[0] + assert np.abs(xllp - xll) < 1e-6 + assert np.abs(yllp - yll) < 1e-6 + + m.modelgrid.set_coord_info(xoff=xll, yoff=yll, angrot=rotation) + modelmap = flopy.plot.PlotMapView(model=m) + pc = modelmap.plot_grid() + check_vertices() + + modelmap = flopy.plot.PlotMapView(modelgrid=m.modelgrid) + pc = modelmap.plot_grid() + check_vertices() + + mf = flopy.modflow.Modflow() + + # Model domain and grid definition + dis = flopy.modflow.ModflowDis( + mf, + nlay=1, + nrow=10, + ncol=20, + delr=1.0, + delc=1.0, + ) + xul, yul = 100.0, 210.0 + mg = mf.modelgrid + mf.modelgrid.set_coord_info( + xoff=mg._xul_to_xll(xul, 0.0), yoff=mg._yul_to_yll(yul, 0.0) + ) + verts = [[101.0, 201.0], [119.0, 209.0]] + modelxsect = flopy.plot.PlotCrossSection(model=mf, line={"line": 
verts}) + patchcollection = modelxsect.plot_grid() + + +@pytest.mark.mf6 +@pytest.mark.xfail(reason="sometimes get wrong collection type") +def test_map_view_bc_gwfs_disv(example_data_path): + mpath = example_data_path / "mf6" / "test003_gwfs_disv" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("gwf_1") + ml6.modelgrid.set_coord_info(angrot=-14) + mapview = flopy.plot.PlotMapView(model=ml6) + mapview.plot_bc("CHD") + ax = mapview.ax + + if len(ax.collections) == 0: + raise AssertionError("Boundary condition was not drawn") + + for col in ax.collections: + assert isinstance( + col, (QuadMesh, PathCollection) + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail(reason="sometimes get wrong collection type") +def test_map_view_bc_lake2tr(example_data_path): + mpath = example_data_path / "mf6" / "test045_lake2tr" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("lakeex2a") + mapview = flopy.plot.PlotMapView(model=ml6) + mapview.plot_bc("LAK") + mapview.plot_bc("SFR") + + ax = mapview.ax + if len(ax.collections) == 0: + raise AssertionError("Boundary condition was not drawn") + + for col in ax.collections: + assert isinstance( + col, (QuadMesh, PathCollection) + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail(reason="sometimes get wrong collection type") +def test_map_view_bc_2models_mvr(example_data_path): + mpath = example_data_path / "mf6" / "test006_2models_mvr" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("parent") + ml6c = sim.get_model("child") + ml6c.modelgrid.set_coord_info(xoff=700, yoff=0, angrot=0) + + mapview = flopy.plot.PlotMapView(model=ml6) + mapview.plot_bc("MAW") + ax = mapview.ax + + assert len(ax.collections) > 0, "Boundary condition was not drawn" + + mapview2 = flopy.plot.PlotMapView(model=ml6c, ax=mapview.ax) + mapview2.plot_bc("MAW") + ax = mapview2.ax + + assert len(ax.collections) > 0, "Boundary condition was not drawn" + + for col in ax.collections: + assert isinstance( + col, (QuadMesh, PathCollection) + ), f"Unexpected collection type: {type(col)}" + + +@pytest.mark.mf6 +@pytest.mark.xfail(reason="sometimes get wrong collection type") +def test_map_view_bc_UZF_3lay(example_data_path): + mpath = example_data_path / "mf6" / "test001e_UZF_3lay" + sim = MFSimulation.load(sim_ws=mpath) + ml6 = sim.get_model("gwf_1") + + mapview = flopy.plot.PlotMapView(model=ml6) + mapview.plot_bc("UZF") + ax = mapview.ax + + if len(ax.collections) == 0: + raise AssertionError("Boundary condition was not drawn") + + for col in ax.collections: + assert isinstance( + col, (QuadMesh, PathCollection) + ), f"Unexpected collection type: {type(col)}" + + +def test_map_view_contour(function_tmpdir): + arr = np.random.rand(10, 10) * 100 + arr[-1, :] = np.nan + delc = np.array([10] * 10, dtype=float) + delr = np.array([8] * 10, dtype=float) + top = np.ones((10, 10), dtype=float) + botm = np.ones((3, 10, 10), dtype=float) + botm[0] = 0.75 + botm[1] = 0.5 + botm[2] = 0.25 + idomain = np.ones((3, 10, 10)) + idomain[0, 0, :] = 0 + vmin = np.nanmin(arr) + vmax = np.nanmax(arr) + levels = np.linspace(vmin, vmax, 7) + + grid = StructuredGrid( + delc=delc, + delr=delr, + top=top, + botm=botm, + idomain=idomain, + lenuni=1, + nlay=3, + nrow=10, + ncol=10, + ) + + pmv = PlotMapView(modelgrid=grid, layer=0) + contours = pmv.contour_array(a=arr) + plt.savefig(function_tmpdir / "map_view_contour.png") + + # if we ever revert from standard contours to tricontours, restore this nan check + # 
for ix, lev in enumerate(contours.levels): + # if not np.allclose(lev, levels[ix]): + # raise AssertionError("TriContour NaN catch Failed") diff --git a/autotest/test_plot_particle_tracks.py b/autotest/test_plot_particle_tracks.py new file mode 100644 index 0000000000..6116d5fbf7 --- /dev/null +++ b/autotest/test_plot_particle_tracks.py @@ -0,0 +1,174 @@ +from os.path import join + +import numpy as np +import pandas as pd +import pytest +from matplotlib.collections import LineCollection, PathCollection +from modflow_devtools.markers import requires_exe, requires_pkg + +from flopy.modflow import Modflow +from flopy.modpath import Modpath6, Modpath6Bas +from flopy.plot import PlotCrossSection, PlotMapView +from flopy.utils import CellBudgetFile, EndpointFile, HeadFile, PathlineFile + + +@pytest.fixture +def modpath_model(function_tmpdir, example_data_path): + # test with multi-layer example + load_ws = example_data_path / "mp6" + + ml = Modflow.load("EXAMPLE.nam", model_ws=load_ws, exe_name="mf2005") + ml.change_model_ws(function_tmpdir) + ml.write_input() + ml.run_model() + + mp = Modpath6( + modelname="ex6", + exe_name="mp6", + modflowmodel=ml, + model_ws=function_tmpdir, + ) + + mpb = Modpath6Bas( + mp, hdry=ml.lpf.hdry, laytyp=ml.lpf.laytyp, ibound=1, prsity=0.1 + ) + + sim = mp.create_mpsim( + trackdir="forward", + simtype="pathline", + packages="RCH", + start_time=(2, 0, 1.0), + ) + return ml, mp, sim + + +@requires_exe("mf2005", "mp6") +def test_plot_map_view_mp6_plot_pathline(modpath_model): + ml, mp, sim = modpath_model + mp.write_input() + mp.run_model(silent=False) + + pthobj = PathlineFile(join(mp.model_ws, "ex6.mppth")) + well_pathlines = pthobj.get_destination_pathline_data( + dest_cells=[(4, 12, 12)] + ) + + def test_plot(pl): + mx = PlotMapView(model=ml) + mx.plot_grid() + mx.plot_bc("WEL", kper=2, color="blue") + pth = mx.plot_pathline(pl, colors="red") + # plt.show() + assert isinstance(pth, LineCollection) + assert len(pth._paths) == 114 + + # support pathlines as list of recarrays + test_plot(well_pathlines) + + # support pathlines as list of dataframes + test_plot([pd.DataFrame(pl) for pl in well_pathlines]) + + # support pathlines as single recarray + test_plot(np.concatenate(well_pathlines)) + + # support pathlines as single dataframe + test_plot(pd.DataFrame(np.concatenate(well_pathlines))) + + +@pytest.mark.slow +@requires_exe("mf2005", "mp6") +def test_plot_cross_section_mp6_plot_pathline(modpath_model): + ml, mp, sim = modpath_model + mp.write_input() + mp.run_model(silent=False) + + pthobj = PathlineFile(join(mp.model_ws, "ex6.mppth")) + well_pathlines = pthobj.get_destination_pathline_data( + dest_cells=[(4, 12, 12)] + ) + + def test_plot(pl): + mx = PlotCrossSection(model=ml, line={"row": 4}) + mx.plot_bc("WEL", kper=2, color="blue") + pth = mx.plot_pathline(pl, method="cell", colors="red") + assert isinstance(pth, LineCollection) + assert len(pth._paths) == 6 + + # support pathlines as list of recarrays + test_plot(well_pathlines) + + # support pathlines as list of dataframes + test_plot([pd.DataFrame(pl) for pl in well_pathlines]) + + # support pathlines as single recarray + test_plot(np.concatenate(well_pathlines)) + + # support pathlines as single dataframe + test_plot(pd.DataFrame(np.concatenate(well_pathlines))) + + +@requires_exe("mf2005", "mp6") +def test_plot_map_view_mp6_endpoint(modpath_model): + ml, mp, sim = modpath_model + mp.write_input() + mp.run_model(silent=False) + + pthobj = EndpointFile(join(mp.model_ws, "ex6.mpend")) + endpts = 
pthobj.get_alldata() + + # support endpoints as recarray + assert isinstance(endpts, np.recarray) + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint(endpts, direction="ending") + # plt.show() + assert isinstance(ep, PathCollection) + + # support endpoints as dataframe + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint(pd.DataFrame(endpts), direction="ending") + # plt.show() + assert isinstance(ep, PathCollection) + + # test various possibilities for endpoint color configuration. + # first, color kwarg as scalar + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint(endpts, direction="ending", color="red") + # plt.show() + assert isinstance(ep, PathCollection) + + # c kwarg as array + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint( + endpts, + direction="ending", + c=np.random.rand(625) * -1000, + cmap="viridis", + ) + # plt.show() + assert isinstance(ep, PathCollection) + + # colorbar: color by time to termination + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint( + endpts, direction="ending", shrink=0.5, colorbar=True + ) + # plt.show() + assert isinstance(ep, PathCollection) + + # if both color and c are provided, c takes precedence + mv = PlotMapView(model=ml) + mv.plot_bc("WEL", kper=2, color="blue") + ep = mv.plot_endpoint( + endpts, + direction="ending", + color="red", + c=np.random.rand(625) * -1000, + cmap="viridis", + ) + # plt.show() + assert isinstance(ep, PathCollection) diff --git a/autotest/test_plot_quasi3d.py b/autotest/test_plot_quasi3d.py new file mode 100644 index 0000000000..f26eb03dac --- /dev/null +++ b/autotest/test_plot_quasi3d.py @@ -0,0 +1,154 @@ +import os + +import numpy as np +import pandas as pd +import pytest +from flaky import flaky +from matplotlib import pyplot as plt +from matplotlib import rcParams +from matplotlib.collections import ( + LineCollection, + PatchCollection, + PathCollection, + QuadMesh, +) +from modflow_devtools.markers import requires_exe, requires_pkg + +import flopy +from flopy.discretization import StructuredGrid +from flopy.mf6 import MFSimulation +from flopy.modflow import ( + Modflow, + ModflowBas, + ModflowDis, + ModflowLpf, + ModflowOc, + ModflowPcg, + ModflowWel, +) +from flopy.modpath import Modpath6, Modpath6Bas +from flopy.plot import PlotCrossSection, PlotMapView +from flopy.utils import CellBudgetFile, EndpointFile, HeadFile, PathlineFile + + +@pytest.fixture +def quasi3d_model(function_tmpdir): + mf = Modflow("model_mf", model_ws=function_tmpdir, exe_name="mf2005") + + # Model domain and grid definition + Lx = 1000.0 + Ly = 1000.0 + ztop = 0.0 + zbot = -30.0 + nlay = 3 + nrow = 10 + ncol = 10 + delr = Lx / ncol + delc = Ly / nrow + laycbd = [0] * (nlay) + laycbd[0] = 1 + botm = np.linspace(ztop, zbot, nlay + np.sum(laycbd) + 1)[1:] + + # Create the discretization object + ModflowDis( + mf, + nlay, + nrow, + ncol, + delr=delr, + delc=delc, + top=ztop, + botm=botm, + laycbd=laycbd, + ) + + # Variables for the BAS package + ibound = np.ones((nlay, nrow, ncol), dtype=np.int32) + ibound[:, :, 0] = -1 + ibound[:, :, -1] = -1 + strt = np.ones((nlay, nrow, ncol), dtype=np.float32) + strt[:, :, 0] = 10.0 + strt[:, :, -1] = 0.0 + ModflowBas(mf, ibound=ibound, strt=strt) + + # Add LPF package to the MODFLOW model + ModflowLpf(mf, hk=10.0, vka=10.0, ipakcb=53, vkcb=10) + + # add a well + row = int((nrow - 1) / 2) + col = 
int((ncol - 1) / 2) + spd = {0: [[1, row, col, -1000]]} + ModflowWel(mf, stress_period_data=spd) + + # Add OC package to the MODFLOW model + spd = {(0, 0): ["save head", "save budget"]} + ModflowOc(mf, stress_period_data=spd, compact=True) + + # Add PCG package to the MODFLOW model + ModflowPcg(mf) + + # Write the MODFLOW model input files + mf.write_input() + + # Run the MODFLOW model + success, buff = mf.run_model() + + assert success, "test_plotting_with_quasi3d_layers() failed" + + return mf + + +@requires_exe("mf2005") +def test_map_plot_with_quasi3d_layers(quasi3d_model): + # read output + hf = HeadFile( + os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.hds") + ) + head = hf.get_data(totim=1.0) + cbb = CellBudgetFile( + os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.cbc") + ) + frf = cbb.get_data(text="FLOW RIGHT FACE", totim=1.0)[0] + fff = cbb.get_data(text="FLOW FRONT FACE", totim=1.0)[0] + flf = cbb.get_data(text="FLOW LOWER FACE", totim=1.0)[0] + + # plot a map + plt.figure() + mv = PlotMapView(model=quasi3d_model, layer=1) + mv.plot_grid() + mv.plot_array(head) + mv.contour_array(head) + mv.plot_ibound() + mv.plot_bc("wel") + mv.plot_vector(frf, fff) + plt.savefig(os.path.join(quasi3d_model.model_ws, "plt01.png")) + + +@requires_exe("mf2005") +def test_cross_section_with_quasi3d_layers(quasi3d_model): + # read output + hf = HeadFile( + os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.hds") + ) + head = hf.get_data(totim=1.0) + cbb = CellBudgetFile( + os.path.join(quasi3d_model.model_ws, f"{quasi3d_model.name}.cbc") + ) + frf = cbb.get_data(text="FLOW RIGHT FACE", totim=1.0)[0] + fff = cbb.get_data(text="FLOW FRONT FACE", totim=1.0)[0] + flf = cbb.get_data(text="FLOW LOWER FACE", totim=1.0)[0] + + # plot a cross-section + plt.figure() + cs = PlotCrossSection( + model=quasi3d_model, + line={"row": int((quasi3d_model.modelgrid.nrow - 1) / 2)}, + ) + cs.plot_grid() + cs.plot_array(head) + cs.contour_array(head) + cs.plot_ibound() + cs.plot_bc("wel") + cs.plot_vector(frf, fff, flf, head=head) + plt.savefig(os.path.join(quasi3d_model.model_ws, "plt02.png")) + plt.close() diff --git a/autotest/test_shapefile_utils.py b/autotest/test_shapefile_utils.py index 30041acdfc..b347806f1b 100644 --- a/autotest/test_shapefile_utils.py +++ b/autotest/test_shapefile_utils.py @@ -17,7 +17,7 @@ from .test_grid import minimal_unstructured_grid_info, minimal_vertex_grid_info -@requires_pkg("pyshp", "shapely") +@requires_pkg("shapefile", "shapely") def test_model_attributes_to_shapefile(example_data_path, function_tmpdir): # freyberg mf2005 model name = "freyberg" @@ -53,7 +53,7 @@ def test_model_attributes_to_shapefile(example_data_path, function_tmpdir): assert shpfile_path.exists() -@requires_pkg("pyproj", "pyshp", "shapely") +@requires_pkg("pyproj", "shapefile", "shapely") def test_write_grid_shapefile( minimal_unstructured_grid_info, minimal_vertex_grid_info, function_tmpdir ): From 9ca601259604b1805d8ec1a30b0a883a7b5f019a Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 22 Sep 2023 12:29:19 -0400 Subject: [PATCH 057/101] refactor(triangle): raise if output files not found (#1954) --- autotest/test_triangle.py | 20 ++++++++++++++++++++ flopy/utils/triangle.py | 29 ++++------------------------- 2 files changed, 24 insertions(+), 25 deletions(-) create mode 100644 autotest/test_triangle.py diff --git a/autotest/test_triangle.py b/autotest/test_triangle.py new file mode 100644 index 0000000000..f00260a031 --- /dev/null +++ b/autotest/test_triangle.py @@ -0,0 
+1,20 @@ +import shutil +from os import environ +from platform import system + +import pytest +from modflow_devtools.markers import requires_exe +from modflow_devtools.misc import set_env + +import flopy +from flopy.utils.triangle import Triangle + + +@requires_exe("mf6") +def test_output_files_not_found(function_tmpdir): + tri = Triangle(model_ws=function_tmpdir, maximum_area=1.0, angle=30) + + # if expected output files are not found, + # Triangle should raise FileNotFoundError + with pytest.raises(FileNotFoundError): + tri._load_results() diff --git a/flopy/utils/triangle.py b/flopy/utils/triangle.py index d974d367fa..9d7c5bb7b4 100644 --- a/flopy/utils/triangle.py +++ b/flopy/utils/triangle.py @@ -57,7 +57,6 @@ def __init__( self._nodes = nodes self.additional_args = additional_args self._initialize_vars() - return def add_polygon(self, polygon): """ @@ -90,7 +89,6 @@ def add_polygon(self, polygon): if len(polygon) > 1: for hole in polygon[1:]: self.add_hole(hole) - return def add_hole(self, hole): """ @@ -107,7 +105,6 @@ def add_hole(self, hole): """ self._holes.append(hole) - return def add_region(self, point, attribute=0, maximum_area=None): """ @@ -131,7 +128,6 @@ def add_region(self, point, attribute=0, maximum_area=None): """ self._regions.append([point, attribute, maximum_area]) - return def build(self, verbose=False): """ @@ -196,8 +192,6 @@ def build(self, verbose=False): for row in self.ele: self.iverts.append([row[1], row[2], row[3]]) - return - def plot( self, ax=None, @@ -320,7 +314,6 @@ def plot_boundary(self, ibm, ax=None, **kwargs): y1 = self.node["y"][iv1] y2 = self.node["y"][iv2] ax.plot([x1, x2], [y1, y2], **kwargs) - return def plot_vertices(self, ax=None, **kwargs): """ @@ -342,7 +335,6 @@ def plot_vertices(self, ax=None, **kwargs): if ax is None: ax = plt.gca() ax.plot(self.node["x"], self.node["y"], lw=0, **kwargs) - return def label_vertices(self, ax=None, onebased=True, **kwargs): """ @@ -374,7 +366,6 @@ def label_vertices(self, ax=None, onebased=True, **kwargs): if onebased: s += 1 ax.text(x, y, str(s), **kwargs) - return def plot_centroids(self, ax=None, **kwargs): """ @@ -397,7 +388,6 @@ def plot_centroids(self, ax=None, **kwargs): ax = plt.gca() xcyc = self.get_xcyc() ax.plot(xcyc[:, 0], xcyc[:, 1], lw=0, **kwargs) - return def label_cells(self, ax=None, onebased=True, **kwargs): """ @@ -430,7 +420,6 @@ def label_cells(self, ax=None, onebased=True, **kwargs): if onebased: s += 1 ax.text(x, y, str(s), **kwargs) - return def get_xcyc(self): """ @@ -600,7 +589,6 @@ def clean(self): os.remove(fname) if os.path.isfile(fname): print(f"Could not remove: {fname}") - return def _initialize_vars(self): self.file_prefix = "_triangle" @@ -613,7 +601,6 @@ def _initialize_vars(self): self.verts = None self.iverts = None self.edgedict = None - return def _load_results(self): # node file @@ -621,8 +608,7 @@ def _load_results(self): dt = [("ivert", int), ("x", float), ("y", float)] fname = os.path.join(self.model_ws, f"{self.file_prefix}.1.{ext}") setattr(self, ext, None) - if os.path.isfile(fname): - f = open(fname, "r") + with open(fname, "r") as f: line = f.readline() f.close() ll = line.strip().split() @@ -644,8 +630,7 @@ def _load_results(self): dt = [("icell", int), ("iv1", int), ("iv2", int), ("iv3", int)] fname = os.path.join(self.model_ws, f"{self.file_prefix}.1.{ext}") setattr(self, ext, None) - if os.path.isfile(fname): - f = open(fname, "r") + with open(fname, "r") as f: line = f.readline() f.close() ll = line.strip().split() @@ -664,8 +649,7 @@ def 
_load_results(self): dt = [("iedge", int), ("endpoint1", int), ("endpoint2", int)] fname = os.path.join(self.model_ws, f"{self.file_prefix}.1.{ext}") setattr(self, ext, None) - if os.path.isfile(fname): - f = open(fname, "r") + with open(fname, "r") as f: line = f.readline() f.close() ll = line.strip().split() @@ -687,8 +671,7 @@ def _load_results(self): ] fname = os.path.join(self.model_ws, f"{self.file_prefix}.1.{ext}") setattr(self, ext, None) - if os.path.isfile(fname): - f = open(fname, "r") + with open(fname, "r") as f: line = f.readline() f.close() ll = line.strip().split() @@ -699,8 +682,6 @@ def _load_results(self): assert a.shape[0] == ncells setattr(self, ext, a) - return - def _write_nodefile(self, fname): f = open(fname, "w") nvert = 0 @@ -776,7 +757,6 @@ def _write_polyfile(self, fname): f.write(s) f.close() - return def _create_edge_dict(self): """ @@ -789,4 +769,3 @@ def _create_edge_dict(self): edgedict[(iv1, iv2)] = iseg edgedict[(iv2, iv1)] = iseg self.edgedict = edgedict - return From 1a862d34463c24ef93ecf2dd41fc0395a39604d7 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 22 Sep 2023 21:58:17 -0400 Subject: [PATCH 058/101] docs: remove redundant example scripts, update RTD layout (#1937) * convert remaining swi2 examples to notebooks (with descriptions from tm6-a46 publication) * reorder sections in ReadTheDocs for readability --- .docs/Notebooks/swi2package_example2.py | 500 ++++++++++++++++++ .docs/Notebooks/swi2package_example3.py | 362 +++++++++++++ .docs/Notebooks/swi2package_example5.py | 662 ++++++++++++++++++++++++ .docs/create_rstfiles.py | 2 +- .docs/examples.rst | 5 +- .docs/main.rst | 5 +- .docs/utilities.rst | 5 +- autotest/test_example_scripts.py | 44 -- examples/scripts/flopy_henry.py | 221 -------- examples/scripts/flopy_lake_example.py | 163 ------ examples/scripts/flopy_swi2_ex1.py | 288 ----------- examples/scripts/flopy_swi2_ex2.py | 497 ------------------ examples/scripts/flopy_swi2_ex3.py | 335 ------------ examples/scripts/flopy_swi2_ex4.py | 604 --------------------- examples/scripts/flopy_swi2_ex5.py | 647 ----------------------- 15 files changed, 1534 insertions(+), 2806 deletions(-) create mode 100644 .docs/Notebooks/swi2package_example2.py create mode 100644 .docs/Notebooks/swi2package_example3.py create mode 100644 .docs/Notebooks/swi2package_example5.py delete mode 100644 autotest/test_example_scripts.py delete mode 100644 examples/scripts/flopy_henry.py delete mode 100644 examples/scripts/flopy_lake_example.py delete mode 100644 examples/scripts/flopy_swi2_ex1.py delete mode 100644 examples/scripts/flopy_swi2_ex2.py delete mode 100644 examples/scripts/flopy_swi2_ex3.py delete mode 100644 examples/scripts/flopy_swi2_ex4.py delete mode 100644 examples/scripts/flopy_swi2_ex5.py diff --git a/.docs/Notebooks/swi2package_example2.py b/.docs/Notebooks/swi2package_example2.py new file mode 100644 index 0000000000..40e728154a --- /dev/null +++ b/.docs/Notebooks/swi2package_example2.py @@ -0,0 +1,500 @@ +# --- +# jupyter: +# jupytext: +# notebook_metadata_filter: all +# text_representation: +# extension: .py +# format_name: light +# format_version: '1.5' +# jupytext_version: 1.15.0 +# kernelspec: +# display_name: Python 3 +# language: python +# name: python3 +# language_info: +# codemirror_mode: +# name: ipython +# version: 3 +# file_extension: .py +# mimetype: text/x-python +# name: python +# nbconvert_exporter: python +# pygments_lexer: ipython3 +# version: 3.11.2 +# metadata: +# section: mf2005 +# --- + +# # SWI2 Example 2. 
Rotating Brackish Zone
+#
+# This example problem modifies the rotating interface problem: there are 3 zones, no boundary inflow, the aquifer top and bottom are impermeable, and storage changes are ignored. The problem domain is 300m long, 40m high, and 1m wide. The aquifer is confined. At x=0, there is a constant head of 0m.
+#
+# The grid discretization has 60 columns, 1 row, and 1 layer, with a delr of 5m, a delc of 1m, and a height of 40m. The time discretization is a single period with 1000 time steps, each of 2 days.
+#
+# There are three groundwater zones: freshwater, brackish, and seawater. The zones are separated by two active ZETA surfaces representing the 25% and 75% seawater salinity contours. Fluid density is represented using the stratified option (ISTRAT=1). The maximum slope of the toe and tip is specified as 0.4, and default tip and toe parameters are used (ALPHA=BETA=0.1). At time t = 0, both interfaces are straight and oriented 45° from horizontal. Initial ZETA surfaces 1 and 2 extend from (x,z) = (150,0) to (x,z) = (190,-40), and from (x,z) = (110,0) to (x,z) = (150,-40), respectively. The brackish zone rotates toward a horizontal position over time.

+# Import dependencies.

+# +
+import os
+import sys
+from tempfile import TemporaryDirectory
+
+import matplotlib as mpl
+import matplotlib.pyplot as plt
+import numpy as np
+
+import flopy
+
+print(sys.version)
+print(f"numpy version: {np.__version__}")
+print(f"matplotlib version: {mpl.__version__}")
+print(f"flopy version: {flopy.__version__}")
+# -

+# Modify default matplotlib settings.
+updates = {
+    "font.family": ["Arial"],
+    "mathtext.default": "regular",
+    "pdf.compression": 0,
+    "pdf.fonttype": 42,
+    "legend.fontsize": 7,
+    "axes.labelsize": 8,
+    "xtick.labelsize": 7,
+    "ytick.labelsize": 7,
+}
+plt.rcParams.update(updates)

+# Define model name and the location of the MODFLOW executable (assumed available on the path).

+modelname = "swiex2"
+exe_name = "mf2005"

+# Create a temporary workspace.

+temp_dir = TemporaryDirectory()
+workspace = temp_dir.name

+# Create working subdirectories for the SWI2 and SEAWAT models.

+dirs = [os.path.join(workspace, "SWI2"), os.path.join(workspace, "SEAWAT")]
+for d in dirs:
+    if not os.path.exists(d):
+        os.mkdir(d)

+# Define model discretization information.

+nper = 1
+perlen = 2000
+nstp = 1000
+nlay, nrow, ncol = 1, 1, 60
+delr = 5.0
+nsurf = 2
+x = np.arange(0.5 * delr, ncol * delr, delr)
+xedge = np.linspace(0, float(ncol) * delr, len(x) + 1)
+ibound = np.ones((nrow, ncol), int)
+ibound[0, 0] = -1

+# Define SWI2 data.

+z0 = np.zeros((nlay, nrow, ncol), float)
+z1 = np.zeros((nlay, nrow, ncol), float)
+z0[0, 0, 30:38] = np.arange(-2.5, -40, -5)
+z0[0, 0, 38:] = -40
+z1[0, 0, 22:30] = np.arange(-2.5, -40, -5)
+z1[0, 0, 30:] = -40
+z = []
+z.append(z0)
+z.append(z1)
+ssz = 0.2
+isource = np.ones((nrow, ncol), "int")
+isource[0, 0] = 2

+# Create a stratified model and specify that it is a MODFLOW 2005 model.
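+
+# First, as an optional aside (not part of the original example), the initial
+# interface surfaces defined above can be sanity-checked:
+
+for i, zsrf in enumerate(z, start=1):
+    # each surface should lie within the aquifer (0 to -40 m) and be
+    # non-increasing from left to right
+    assert zsrf.min() >= -40.0 and zsrf.max() <= 0.0
+    assert np.all(np.diff(zsrf[0, 0, :]) <= 0.0)
+    print(f"zeta surface {i}: min = {zsrf.min():.1f} m, max = {zsrf.max():.1f} m")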
+ +modelname = "swiex2_strat" +print("creating...", modelname) +ml = flopy.modflow.Modflow( + modelname, version="mf2005", exe_name=exe_name, model_ws=dirs[0] +) +discret = flopy.modflow.ModflowDis( + ml, + nlay=1, + ncol=ncol, + nrow=nrow, + delr=delr, + delc=1, + top=0, + botm=[-40.0], + nper=nper, + perlen=perlen, + nstp=nstp, +) +bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.05) +bcf = flopy.modflow.ModflowBcf(ml, laycon=0, tran=2 * 40) +swi = flopy.modflow.ModflowSwi2( + ml, + iswizt=55, + nsrf=nsurf, + istrat=1, + toeslope=0.2, + tipslope=0.2, + nu=[0, 0.0125, 0.025], + zeta=z, + ssz=ssz, + isource=isource, + nsolver=1, +) +oc = flopy.modflow.ModflowOc(ml, stress_period_data={(0, 999): ["save head"]}) +pcg = flopy.modflow.ModflowPcg(ml) + +# Write input files and run the stratified model. + +ml.write_input() +success, buff = ml.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load results from the stratified model. + +zetafile = os.path.join(dirs[0], f"{modelname}.zta") +zobj = flopy.utils.CellBudgetFile(zetafile) +zkstpkper = zobj.get_kstpkper() +zeta = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 1")[0] +zeta2 = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 2")[0] + +# Define VD model. + +modelname = "swiex2_vd" +print("creating...", modelname) +ml = flopy.modflow.Modflow( + modelname, version="mf2005", exe_name=exe_name, model_ws=dirs[0] +) +discret = flopy.modflow.ModflowDis( + ml, + nlay=1, + ncol=ncol, + nrow=nrow, + delr=delr, + delc=1, + top=0, + botm=[-40.0], + nper=nper, + perlen=perlen, + nstp=nstp, +) +bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.05) +bcf = flopy.modflow.ModflowBcf(ml, laycon=0, tran=2 * 40) +swi = flopy.modflow.ModflowSwi2( + ml, + iswizt=55, + nsrf=nsurf, + istrat=0, + toeslope=0.2, + tipslope=0.2, + nu=[0, 0, 0.025, 0.025], + zeta=z, + ssz=ssz, + isource=isource, + nsolver=1, +) +oc = flopy.modflow.ModflowOc(ml, stress_period_data={(0, 999): ["save head"]}) +pcg = flopy.modflow.ModflowPcg(ml) + +# Write input files and run VD model. + +ml.write_input() +success, buff = ml.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load VD model results. + +zetafile = os.path.join(dirs[0], f"{modelname}.zta") +zobj = flopy.utils.CellBudgetFile(zetafile) +zkstpkper = zobj.get_kstpkper() +zetavd = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 1")[0] +zetavd2 = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 2")[0] + +# Define SEAWAT model. 
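+
+# Before that, as an optional aside (not part of the original example), the
+# stratified and VD (continuous option) interfaces loaded above can be
+# compared numerically:
+
+for name, zs, zv in (("surface 1", zeta, zetavd), ("surface 2", zeta2, zetavd2)):
+    print(f"max |stratified - VD| for {name}: {np.abs(zs - zv).max():.3f} m")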
+ +# + +swtexe_name = "swtv4" +modelname = "swiex2_swt" +print("creating...", modelname) +swt_xmax = 300.0 +swt_zmax = 40.0 +swt_delr = 1.0 +swt_delc = 1.0 +swt_delz = 0.5 +swt_ncol = int(swt_xmax / swt_delr) # 300 +swt_nrow = 1 +swt_nlay = int(swt_zmax / swt_delz) # 80 +print(swt_nlay, swt_nrow, swt_ncol) +swt_ibound = np.ones((swt_nlay, swt_nrow, swt_ncol), int) +# swt_ibound[0, swt_ncol-1, 0] = -1 +swt_ibound[0, 0, 0] = -1 +swt_x = np.arange(0.5 * swt_delr, swt_ncol * swt_delr, swt_delr) +swt_xedge = np.linspace(0, float(ncol) * delr, len(swt_x) + 1) +swt_top = 0.0 +z0 = swt_top +swt_botm = np.zeros((swt_nlay), float) +swt_z = np.zeros((swt_nlay), float) +zcell = -swt_delz / 2.0 +for ilay in range(0, swt_nlay): + z0 -= swt_delz + swt_botm[ilay] = z0 + swt_z[ilay] = zcell + zcell -= swt_delz +# swt_X, swt_Z = np.meshgrid(swt_x, swt_botm) +swt_X, swt_Z = np.meshgrid(swt_x, swt_z) +# mt3d +# mt3d boundary array set to all active +icbund = np.ones((swt_nlay, swt_nrow, swt_ncol), int) +# create initial concentrations for MT3D +sconc = np.ones((swt_nlay, swt_nrow, swt_ncol), float) +sconcp = np.zeros((swt_nlay, swt_ncol), float) +xsb = 110 +xbf = 150 +for ilay in range(0, swt_nlay): + for icol in range(0, swt_ncol): + if swt_x[icol] > xsb: + sconc[ilay, 0, icol] = 0.5 + if swt_x[icol] > xbf: + sconc[ilay, 0, icol] = 0.0 + for icol in range(0, swt_ncol): + sconcp[ilay, icol] = sconc[ilay, 0, icol] + xsb += swt_delz + xbf += swt_delz + +# ssm data +itype = flopy.mt3d.Mt3dSsm.itype_dict() +ssm_data = {0: [0, 0, 0, 35.0, itype["BAS6"]]} + +# print sconcp +# mt3d print times +timprs = (np.arange(5) + 1) * 2000.0 +nprs = len(timprs) +# create the MODFLOW files +m = flopy.seawat.Seawat(modelname, exe_name=swtexe_name, model_ws=dirs[1]) +discret = flopy.modflow.ModflowDis( + m, + nrow=swt_nrow, + ncol=swt_ncol, + nlay=swt_nlay, + delr=swt_delr, + delc=swt_delc, + laycbd=0, + top=swt_top, + botm=swt_botm, + nper=nper, + perlen=perlen, + nstp=1, + steady=False, +) +bas = flopy.modflow.ModflowBas(m, ibound=swt_ibound, strt=0.05) +lpf = flopy.modflow.ModflowLpf( + m, hk=2.0, vka=2.0, ss=0.0, sy=0.0, laytyp=0, layavg=0 +) +oc = flopy.modflow.ModflowOc(m, save_every=1, save_types=["save head"]) +pcg = flopy.modflow.ModflowPcg(m) +# Create the MT3DMS model files +adv = flopy.mt3d.Mt3dAdv( + m, + mixelm=-1, # -1 is TVD + percel=0.05, + nadvfd=0, + # 0 or 1 is upstream; 2 is central in space + # particle based methods + nplane=4, + mxpart=1e7, + itrack=2, + dceps=1e-4, + npl=16, + nph=16, + npmin=8, + npmax=256, +) +btn = flopy.mt3d.Mt3dBtn( + m, + icbund=1, + prsity=ssz, + sconc=sconc, + ifmtcn=-1, + chkmas=False, + nprobs=10, + nprmas=10, + dt0=0.0, + ttsmult=1.2, + ttsmax=100.0, + ncomp=1, + nprs=nprs, + timprs=timprs, + mxstrn=1e8, +) +dsp = flopy.mt3d.Mt3dDsp(m, al=0.0, trpt=1.0, trpv=1.0, dmcoef=0.0) +gcg = flopy.mt3d.Mt3dGcg( + m, mxiter=1, iter1=50, isolve=3, cclose=1e-6, iprgcg=5 +) +ssm = flopy.mt3d.Mt3dSsm(m, stress_period_data=ssm_data) +# Create the SEAWAT model files +vdf = flopy.seawat.SeawatVdf( + m, + nswtcpl=1, + iwtable=0, + densemin=0, + densemax=0, + denseref=1000.0, + denseslp=25.0, + firstdt=1.0e-03, +) +# - + +# Write input files and run SEAWAT model. + +m.write_input() +success, buff = m.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load SEAWAT model results. 
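+
+# Before reading the UCN file, a brief note on units (an aside, not part of
+# the original example): with `denseref=1000.0` and `denseslp=25.0` above, a
+# simulated concentration c maps to the dimensionless density
+# nu = denseslp * c / denseref, so c = 0, 0.5, and 1 reproduce the SWI2 zone
+# values nu = 0, 0.0125, and 0.025 used earlier:
+
+for c in (0.0, 0.5, 1.0):
+    print(f"c = {c:.1f} -> nu = {25.0 * c / 1000.0:.4f}")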
+ +ucnfile = os.path.join(dirs[1], "MT3D001.UCN") +uobj = flopy.utils.UcnFile(ucnfile) +times = uobj.get_times() +print(times) +ukstpkper = uobj.get_kstpkper() +print(ukstpkper) +c = uobj.get_data(totim=times[-1]) +conc = np.zeros((swt_nlay, swt_ncol), float) +for icol in range(0, swt_ncol): + for ilay in range(0, swt_nlay): + conc[ilay, icol] = c[ilay, 0, icol] + +# Create plots. + +# + +# figure +fwid = 7.0 # 6.50 +fhgt = 4.5 # 6.75 +flft = 0.125 +frgt = 0.95 +fbot = 0.125 +ftop = 0.925 + +print("creating cross-section figure...") +xsf, axes = plt.subplots(3, 1, figsize=(fwid, fhgt), facecolor="w") +xsf.subplots_adjust( + wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop +) +# plot initial conditions +ax = axes[0] +ax.text( + -0.075, + 1.05, + "A", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +# text(.975, .1, '(a)', transform = ax.transAxes, va = 'center', ha = 'center') +ax.plot([110, 150], [0, -40], "k") +ax.plot([150, 190], [0, -40], "k") +ax.set_xlim(0, 300) +ax.set_ylim(-40, 0) +ax.set_yticks(np.arange(-40, 1, 10)) +ax.text(50, -20, "salt", va="center", ha="center") +ax.text(150, -20, "brackish", va="center", ha="center") +ax.text(250, -20, "fresh", va="center", ha="center") +ax.set_ylabel("Elevation, in meters") +# plot stratified swi2 and seawat results +ax = axes[1] +ax.text( + -0.075, + 1.05, + "B", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +# +zp = zeta[0, 0, :] +p = (zp < 0.0) & (zp > -40.0) +ax.plot(x[p], zp[p], "b", linewidth=1.5, drawstyle="steps-mid") +zp = zeta2[0, 0, :] +p = (zp < 0.0) & (zp > -40.0) +ax.plot(x[p], zp[p], "b", linewidth=1.5, drawstyle="steps-mid") +# seawat data +cc = ax.contour( + swt_X, + swt_Z, + conc, + levels=[0.25, 0.75], + colors="k", + linestyles="solid", + linewidths=0.75, + zorder=101, +) +# fake figures +ax.plot([-100.0, -100], [-100.0, -100], "b", linewidth=1.5, label="SWI2") +ax.plot([-100.0, -100], [-100.0, -100], "k", linewidth=0.75, label="SEAWAT") +# legend +leg = ax.legend(loc="lower left", numpoints=1) +leg._drawFrame = False +# axes +ax.set_xlim(0, 300) +ax.set_ylim(-40, 0) +ax.set_yticks(np.arange(-40, 1, 10)) +ax.set_ylabel("Elevation, in meters") +# plot vd model +ax = axes[2] +ax.text( + -0.075, + 1.05, + "C", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +dr = zeta[0, 0, :] +ax.plot(x, dr, "b", linewidth=1.5, drawstyle="steps-mid") +dr = zeta2[0, 0, :] +ax.plot(x, dr, "b", linewidth=1.5, drawstyle="steps-mid") +dr = zetavd[0, 0, :] +ax.plot(x, dr, "r", linewidth=0.75, drawstyle="steps-mid") +dr = zetavd2[0, 0, :] +ax.plot(x, dr, "r", linewidth=0.75, drawstyle="steps-mid") +# fake figures +ax.plot( + [-100.0, -100], + [-100.0, -100], + "b", + linewidth=1.5, + label="SWI2 stratified option", +) +ax.plot( + [-100.0, -100], + [-100.0, -100], + "r", + linewidth=0.75, + label="SWI2 continuous option", +) +# legend +leg = ax.legend(loc="lower left", numpoints=1) +leg._drawFrame = False +# axes +ax.set_xlim(0, 300) +ax.set_ylim(-40, 0) +ax.set_yticks(np.arange(-40, 1, 10)) +ax.set_xlabel("Horizontal distance, in meters") +ax.set_ylabel("Elevation, in meters") +# - + +# Clean up the temporary workspace. 
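+
+# The temporary workspace (and anything written to it) is removed below. To
+# keep the comparison figure, it can first be saved to a persistent location;
+# a minimal sketch with an arbitrary file name (not part of the original
+# example):
+
+xsf.savefig("swiex2_comparison.png", dpi=300)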
+
+try:
+    temp_dir.cleanup()
+except (PermissionError, NotADirectoryError):
+    # can occur on Windows
+    pass
diff --git a/.docs/Notebooks/swi2package_example3.py b/.docs/Notebooks/swi2package_example3.py
new file mode 100644
index 0000000000..415df76f50
--- /dev/null
+++ b/.docs/Notebooks/swi2package_example3.py
@@ -0,0 +1,362 @@
+# ---
+# jupyter:
+#   jupytext:
+#     notebook_metadata_filter: all
+#     text_representation:
+#       extension: .py
+#       format_name: light
+#       format_version: '1.5'
+#       jupytext_version: 1.15.0
+#   kernelspec:
+#     display_name: Python 3
+#     language: python
+#     name: python3
+#   language_info:
+#     codemirror_mode:
+#       name: ipython
+#       version: 3
+#     file_extension: .py
+#     mimetype: text/x-python
+#     name: python
+#     nbconvert_exporter: python
+#     pygments_lexer: ipython3
+#     version: 3.11.2
+#   metadata:
+#     section: mf2005
+# ---
+
+# # SWI2 Example 3. Freshwater-seawater interface movement in a two-aquifer coastal system
+#
+# This example problem simulates transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal aquifer system. The problem domain is 4,000m long, 41m high, and 1m wide. Both aquifers are 20m thick and are separated by a leaky layer 1m thick. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and bottom of the lower aquifer are impermeable.
+#
+# The domain is discretized into 200 columns that are each 20m long (DELR), 1 row that is 1m wide (DELC), and 3 layers that are 20, 1, and 20m thick. A total of 2,000 years is simulated using two 1,000-year stress periods and a constant time step of 2 years. The hydraulic conductivities of the top and bottom aquifers are 2 and 4 m/d, respectively, and the horizontal and vertical hydraulic conductivities of the confining unit are 1 and 0.01 m/d, respectively. The effective porosity is 0.2 for all model layers.
+#
+# The left 600 m of the model domain extends offshore and the ocean boundary is represented as a general head boundary condition (GHB) at the top of model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 0.4 square meter per day (m2/d) and corresponds to a leakance of 0.02 d-1 (or a resistance of 50 days).
+#
+# The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface, ζ2, between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless densities, ν, of the freshwater and saltwater are 0.0 and 0.025. The tip and toe tracking parameters are a TOESLOPE of 0.02 and a TIPSLOPE of 0.04, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and seawater is straight, is at the top of aquifer 1 at x = -100, and has a slope of -0.025 m/m. The SWI2 ISOURCE parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is of the same type as the water at the top of the aquifer. In all other cells, the SWI2 ISOURCE parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer.
+# +# Initially, the net freshwater inflow rate of 0.03 m3/d specified at the right boundary causes flow to occur towards the ocean. The flow in each layer is distributed in proportion to the aquifer transmissivities. During the first 1,000-year stress period, a freshwater source of 0.01 m3/d is specified in the right-most cell (column 200) of the top aquifer, and a freshwater source of 0.02 m3/d is specified in the right-most cell (column 200) of the bottom aquifer. During the second 1,000-year stress period, these values are halved to reduce the net freshwater inflow to 0.015 m3/d, which is distributed in proportion to the transmissivities of both aquifers at the right boundary. + +# Import dependencies. + +# + +import os +import sys +from tempfile import TemporaryDirectory + +import matplotlib as mpl +import matplotlib.pyplot as plt +import numpy as np + +import flopy + +print(sys.version) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") +# - + +# Modify default matplotlib settings +updates = { + "font.family": ["Arial"], + "mathtext.default": "regular", + "pdf.compression": 0, + "pdf.fonttype": 42, + "legend.fontsize": 7, + "axes.labelsize": 8, + "xtick.labelsize": 7, + "ytick.labelsize": 7, +} +plt.rcParams.update(updates) + +# Define some utility functions. + + +def MergeData(ndim, zdata, tb): + sv = 0.05 + md = np.empty((ndim), float) + md.fill(np.nan) + found = np.empty((ndim), bool) + found.fill(False) + for idx, layer in enumerate(zdata): + for jdx, z in enumerate(layer): + if found[jdx] == True: + continue + t0 = tb[idx][0] - sv + t1 = tb[idx][1] + sv + if z < t0 and z > t1: + md[jdx] = z + found[jdx] = True + return md + + +def LegBar(ax, x0, y0, t0, dx, dy, dt, cc): + for c in cc: + ax.plot([x0, x0 + dx], [y0, y0], color=c, linewidth=4) + ctxt = f"{t0:=3d} years" + ax.text(x0 + 2.0 * dx, y0 + dy / 2.0, ctxt, size=5) + y0 += dy + t0 += dt + + +# Define model name and the location of the MODFLOW executable (assumed available on the path). + +modelname = "swiex3" +exe_name = "mf2005" + +# Create a temporary workspace. + +temp_dir = TemporaryDirectory() +workspace = temp_dir.name + +# Define model discretization. + +nlay = 3 +nrow = 1 +ncol = 200 +delr = 20.0 +delc = 1.0 + +# Define well data. + +lrcQ1 = np.array([(0, 0, 199, 0.01), (2, 0, 199, 0.02)]) +lrcQ2 = np.array([(0, 0, 199, 0.01 * 0.5), (2, 0, 199, 0.02 * 0.5)]) + +# Define general head boundary data. + +lrchc = np.zeros((30, 5)) +lrchc[:, [0, 1, 3, 4]] = [0, 0, 0.0, 0.8 / 2.0] +lrchc[:, 2] = np.arange(0, 30) + +# Define SWI2 data. + +zini = np.hstack( + (-9 * np.ones(24), np.arange(-9, -50, -0.5), -50 * np.ones(94)) +)[np.newaxis, :] +iso = np.zeros((1, 200), dtype=int) +iso[:, :30] = -2 + +# Create the flow model. 
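+
+# First, a quick consistency check (an optional aside, not part of the
+# original example): the ISOURCE = -2 flags should coincide with the offshore
+# GHB columns defined above.
+
+ghb_cols = lrchc[:, 2].astype(int)
+assert np.all(iso[0, ghb_cols] == -2)
+print(f"{ghb_cols.size} offshore GHB cells flagged with ISOURCE = -2")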
+ +ml = flopy.modflow.Modflow( + modelname, version="mf2005", exe_name=exe_name, model_ws=workspace +) +discret = flopy.modflow.ModflowDis( + ml, + nrow=nrow, + ncol=ncol, + nlay=3, + delr=delr, + delc=delc, + laycbd=[0, 0, 0], + top=-9.0, + botm=[-29, -30, -50], + nper=2, + perlen=[365 * 1000, 1000 * 365], + nstp=[500, 500], +) +bas = flopy.modflow.ModflowBas(ml, ibound=1, strt=1.0) +bcf = flopy.modflow.ModflowBcf( + ml, laycon=[0, 0, 0], tran=[40.0, 1, 80.0], vcont=[0.005, 0.005] +) +wel = flopy.modflow.ModflowWel(ml, stress_period_data={0: lrcQ1, 1: lrcQ2}) +ghb = flopy.modflow.ModflowGhb(ml, stress_period_data={0: lrchc}) +swi = flopy.modflow.ModflowSwi2( + ml, + iswizt=55, + nsrf=1, + istrat=1, + toeslope=0.01, + tipslope=0.04, + nu=[0, 0.025], + zeta=[zini, zini, zini], + ssz=0.2, + isource=iso, + nsolver=1, +) +oc = flopy.modflow.ModflowOc(ml, save_every=100, save_types=["save head"]) +pcg = flopy.modflow.ModflowPcg(ml) + +# Write the model input files. + +ml.write_input() + +# Run the model. + +success, buff = ml.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load results. + +headfile = os.path.join(workspace, f"{modelname}.hds") +hdobj = flopy.utils.HeadFile(headfile) +head = hdobj.get_data(totim=3.65000e05) + +zetafile = os.path.join(workspace, f"{modelname}.zta") +zobj = flopy.utils.CellBudgetFile(zetafile) +zkstpkper = zobj.get_kstpkper() +zeta = [] +for kk in zkstpkper: + zeta.append(zobj.get_data(kstpkper=kk, text="ZETASRF 1")[0]) +zeta = np.array(zeta) + +fwid, fhgt = 7.00, 4.50 +flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 + +# Plot results. + +# + +colormap = plt.cm.plasma # winter +cc = [] +icolor = 11 +cr = np.linspace(0.0, 0.9, icolor) +for idx in cr: + cc.append(colormap(idx)) +lw = 0.5 + +x = np.arange(-30 * delr + 0.5 * delr, (ncol - 30) * delr, delr) +xedge = np.linspace(-30.0 * delr, (ncol - 30.0) * delr, len(x) + 1) +zedge = [[-9.0, -29.0], [-29.0, -30.0], [-30.0, -50.0]] + +fig = plt.figure(figsize=(fwid, fhgt), facecolor="w") +fig.subplots_adjust( + wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop +) + +ax = fig.add_subplot(311) +ax.text( + -0.075, + 1.05, + "A", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +# confining unit +ax.fill( + [-600, 3400, 3400, -600], + [-29, -29, -30, -30], + fc=[0.8, 0.8, 0.8], + ec=[0.8, 0.8, 0.8], +) +# +z = np.copy(zini[0, :]) +zr = z.copy() +p = (zr < -9.0) & (zr > -50.0) +ax.plot(x[p], zr[p], color=cc[0], linewidth=lw, drawstyle="steps-mid") +# +for i in range(5): + zt = MergeData( + ncol, [zeta[i, 0, 0, :], zeta[i, 1, 0, :], zeta[i, 2, 0, :]], zedge + ) + dr = zt.copy() + ax.plot(x, dr, color=cc[i + 1], linewidth=lw, drawstyle="steps-mid") +# Manufacture a legend bar +LegBar(ax, -200.0, -33.75, 0, 25, -2.5, 200, cc[0:6]) +# axes +ax.set_ylim(-50, -9) +ax.set_ylabel("Elevation, in meters") +ax.set_xlim(-250.0, 2500.0) + +ax = fig.add_subplot(312) +ax.text( + -0.075, + 1.05, + "B", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +# confining unit +ax.fill( + [-600, 3400, 3400, -600], + [-29, -29, -30, -30], + fc=[0.8, 0.8, 0.8], + ec=[0.8, 0.8, 0.8], +) +# +for i in range(4, 10): + zt = MergeData( + ncol, [zeta[i, 0, 0, :], zeta[i, 1, 0, :], zeta[i, 2, 0, :]], zedge + ) + dr = zt.copy() + ax.plot(x, dr, color=cc[i + 1], linewidth=lw, drawstyle="steps-mid") +# Manufacture a legend bar +LegBar(ax, -200.0, -33.75, 1000, 25, -2.5, 200, cc[5:11]) +# axes +ax.set_ylim(-50, -9) +ax.set_ylabel("Elevation, in meters") 
+ax.set_xlim(-250.0, 2500.0) + +ax = fig.add_subplot(313) +ax.text( + -0.075, + 1.05, + "C", + transform=ax.transAxes, + va="center", + ha="center", + size="8", +) +# confining unit +ax.fill( + [-600, 3400, 3400, -600], + [-29, -29, -30, -30], + fc=[0.8, 0.8, 0.8], + ec=[0.8, 0.8, 0.8], +) +# +zt = MergeData( + ncol, [zeta[4, 0, 0, :], zeta[4, 1, 0, :], zeta[4, 2, 0, :]], zedge +) +ax.plot( + x, + zt, + marker="o", + markersize=3, + linewidth=0.0, + markeredgecolor="blue", + markerfacecolor="None", +) +# ghyben herzberg +zeta1 = -9 - 40.0 * (head[0, 0, :]) +gbh = np.empty(len(zeta1), float) +gbho = np.empty(len(zeta1), float) +for idx, z1 in enumerate(zeta1): + if z1 >= -9.0 or z1 <= -50.0: + gbh[idx] = np.nan + gbho[idx] = 0.0 + else: + gbh[idx] = z1 + gbho[idx] = z1 +ax.plot(x, gbh, "r") +np.savetxt(os.path.join(workspace, "Ghyben-Herzberg.out"), gbho) +# fake figures +ax.plot([-100.0, -100], [-100.0, -100], "r", label="Ghyben-Herzberg") +ax.plot( + [-100.0, -100], + [-100.0, -100], + "bo", + markersize=3, + markeredgecolor="blue", + markerfacecolor="None", + label="SWI2", +) +# legend +leg = ax.legend(loc="lower left", numpoints=1) +leg._drawFrame = False +# axes +ax.set_ylim(-50, -9) +ax.set_xlabel("Horizontal distance, in meters") +ax.set_ylabel("Elevation, in meters") +ax.set_xlim(-250.0, 2500.0) +# - + +# Clean up the temporary workspace. + +try: + temp_dir.cleanup() +except (PermissionError, NotADirectoryError): + pass diff --git a/.docs/Notebooks/swi2package_example5.py b/.docs/Notebooks/swi2package_example5.py new file mode 100644 index 0000000000..1482352bb6 --- /dev/null +++ b/.docs/Notebooks/swi2package_example5.py @@ -0,0 +1,662 @@ +# --- +# jupyter: +# jupytext: +# notebook_metadata_filter: all +# text_representation: +# extension: .py +# format_name: light +# format_version: '1.5' +# jupytext_version: 1.15.0 +# kernelspec: +# display_name: Python 3 +# language: python +# name: python3 +# language_info: +# codemirror_mode: +# name: ipython +# version: 3 +# file_extension: .py +# mimetype: text/x-python +# name: python +# nbconvert_exporter: python +# pygments_lexer: ipython3 +# version: 3.11.2 +# metadata: +# section: mf2005 +# --- + +# # SWI2 Example 5. Radial Upconing Problem +# +# This example problem simulates transient movement of the freshwater-seawater interface in response to groundwater withdrawals and is based on the problem of Zhou and others (2005). The aquifer is 120-m thick, confined, storage changes are not considered (because all MODFLOW stress periods are steady-state), and the top and bottom of the aquifer are horizontal and impermeable. +# +# The domain is discretized into a radial flow domain, centered on a pumping well, having 113 columns, 1 row, and 6 layers, with respective cell dimensions of 25m (DELR), 1m (DELC), and 20m. A total of 8 years is simulated using two 4-year stress periods and a constant 1-day time step. +# +# The horizontal and vertical hydraulic conductivity of the aquifer are 21.7 and 8.5 m/d, respectively, and the effective porosity of the aquifer is 0.2. The hydraulic conductivity and effective porosity values applied in the model are based on the approach of logarithmic averaging of interblock transmissivity (LAYAVG=1) to better approximate analytical solutions for radial flow problems (Langevin, 2008). Constant heads were specified at the edge of the model domain (column 113) in all 6 layers, with specified freshwater heads of 0.0m. 
+# +# The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface, ζ2, between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless density difference between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a TOESLOPE and TIPSLOPE of 0.025, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and saltwater is at an elevation of -100m. The SWI2 ISOURCE parameter is set to 1 in the constant head cells in layers 1-5 and to 2 in the constant head cell in layer 6. This ensures that inflow from the constant head cells in model layers 1-5 and 6 is freshwater and saltwater, respectively. In all other cells, the SWI2 ISOURCE parameter was set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer. +# +# A pumping well screened from 0 to -20m with a withdrawal rate of 2,400 m3/d is simulated for stress period 1 at the left side of the model (column 1) in the first layer. To simulate recovery, the pumping well withdrawal rate was set to 0 m3/d for stress period 2. +# +# The simulated position of the interface is shown at 1-year increments for the withdrawal (stress period 1) and recovery (stress period 2) periods. During the withdrawal period, the interface steadily rises from its initial elevation of -100m to a maximum elevation of -87m after 4 years. During the recovery period, the interface elevation decreases to -96m at the end of year 5 but does not return to the initial elevation of -100m at the end of year 8. + +# Import dependencies. + +# + +import math +import os +import sys +from tempfile import TemporaryDirectory + +import matplotlib as mpl +import matplotlib.pyplot as plt +import numpy as np + +import flopy + +print(sys.version) +print(f"numpy version: {np.__version__}") +print(f"matplotlib version: {mpl.__version__}") +print(f"flopy version: {flopy.__version__}") +# - + +# Modify default matplotlib settings. +updates = { + "font.family": ["Arial"], + "mathtext.default": "regular", + "pdf.compression": 0, + "pdf.fonttype": 42, + "legend.fontsize": 7, + "axes.labelsize": 8, + "xtick.labelsize": 7, + "ytick.labelsize": 7, +} +plt.rcParams.update(updates) + +# Create a temporary workspace. + +temp_dir = TemporaryDirectory() +workspace = temp_dir.name + +# Make working subdirectories. + +dirs = [os.path.join(workspace, "SWI2"), os.path.join(workspace, "SEAWAT")] +for d in dirs: + if not os.path.exists(d): + os.mkdir(d) + +# Define grid discretization. + +nlay = 6 +nrow = 1 +ncol = 113 +delr = np.zeros((ncol), float) +delc = 1.0 +r = np.zeros((ncol), float) +x = np.zeros((ncol), float) +edge = np.zeros((ncol), float) +dx = 25.0 +for i in range(0, ncol): + delr[i] = dx +r[0] = delr[0] / 2.0 +for i in range(1, ncol): + r[i] = r[i - 1] + (delr[i - 1] + delr[i]) / 2.0 +x[0] = delr[0] / 2.0 +for i in range(1, ncol): + x[i] = x[i - 1] + (delr[i - 1] + delr[i]) / 2.0 +edge[0] = delr[0] +for i in range(1, ncol): + edge[i] = edge[i - 1] + delr[i] + +# Define time discretization. + +nper = 2 +perlen = [1460, 1460] +nstp = [1460, 1460] +steady = True + +# Define more shared model parameters. 
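+
+# The axisymmetric geometry is handled by scaling hydraulic properties by the
+# ring circumference 2*pi*r of each column, as done in the next cell. A quick
+# illustration for the first few columns (an aside, not part of the original
+# example):
+
+for i in range(3):
+    print(f"column {i}: r = {r[i]:.1f} m, scale factor 2*pi*r = {2.0 * math.pi * r[i]:.1f}")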
+ +# + +nsave_zeta = 8 +ndecay = 4 +ibound = np.ones((nlay, nrow, ncol), int) +for k in range(0, nlay): + ibound[k, 0, ncol - 1] = -1 +bot = np.zeros((nlay, nrow, ncol), float) +dz = 100.0 / float(nlay - 1) +zall = -np.arange(0, 100 + dz, dz) +zall = np.append(zall, -120.0) +tb = -np.arange(dz, 100 + dz, dz) +tb = np.append(tb, -120.0) +for k in range(0, nlay): + for i in range(0, ncol): + bot[k, 0, i] = tb[k] +isource = np.zeros((nlay, nrow, ncol), int) +isource[:, 0, ncol - 1] = 1 +isource[nlay - 1, 0, ncol - 1] = 2 + +khb = (0.0000000000256 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 +kvb = (0.0000000000100 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 +ssb = 1e-5 +sszb = 0.2 +kh = np.zeros((nlay, nrow, ncol), float) +kv = np.zeros((nlay, nrow, ncol), float) +ss = np.zeros((nlay, nrow, ncol), float) +ssz = np.zeros((nlay, nrow, ncol), float) +for k in range(0, nlay): + for i in range(0, ncol): + f = r[i] * 2.0 * math.pi + kh[k, 0, i] = khb * f + kv[k, 0, i] = kvb * f + ss[k, 0, i] = ssb * f + ssz[k, 0, i] = sszb * f +z = np.ones((nlay), float) +z = -100.0 * z + +nwell = 1 +for k in range(0, nlay): + if zall[k] > -20.0 and zall[k + 1] <= -20: + nwell = k + 1 +print(f"nlay={nlay} dz={dz} nwell={nwell}") +wellQ = -2400.0 +wellbtm = -20.0 +wellQpm = wellQ / abs(wellbtm) +well_data = {} +for ip in range(0, nper): + welllist = np.zeros((nwell, 4), float) + for iw in range(0, nwell): + if ip == 0: + b = zall[iw] - zall[iw + 1] + if zall[iw + 1] < wellbtm: + b = zall[iw] - wellbtm + q = wellQpm * b + else: + q = 0.0 + welllist[iw, 0] = iw + welllist[iw, 1] = 0 + welllist[iw, 2] = 0 + welllist[iw, 3] = q + well_data[ip] = welllist.copy() + +ihead = np.zeros((nlay), float) + +ocspd = {} +for i in range(0, nper): + icnt = 0 + for j in range(0, nstp[i]): + icnt += 1 + if icnt == 365: + ocspd[(i, j)] = ["save head"] + icnt = 0 + else: + ocspd[(i, j)] = [] + +solver2params = { + "mxiter": 100, + "iter1": 20, + "npcond": 1, + "zclose": 1.0e-6, + "rclose": 3e-3, + "relax": 1.0, + "nbpol": 2, + "damp": 1.0, + "dampt": 1.0, +} +# - + +# Define SWI2 model name and MODFLOW executable name. + +modelname = "swi2ex5" +mf_name = "mf2005" + +# Define the SWI2 model. + +ml = flopy.modflow.Modflow( + modelname, version="mf2005", exe_name=mf_name, model_ws=dirs[0] +) +discret = flopy.modflow.ModflowDis( + ml, + nrow=nrow, + ncol=ncol, + nlay=nlay, + delr=delr, + delc=delc, + top=0, + botm=bot, + laycbd=0, + nper=nper, + perlen=perlen, + nstp=nstp, + steady=steady, +) +bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead) +lpf = flopy.modflow.ModflowLpf( + ml, hk=kh, vka=kv, ss=ss, sy=ssz, vkcb=0, laytyp=0, layavg=1 +) +wel = flopy.modflow.ModflowWel(ml, stress_period_data=well_data) +swi = flopy.modflow.ModflowSwi2( + ml, + iswizt=55, + nsrf=1, + istrat=1, + toeslope=0.025, + tipslope=0.025, + nu=[0, 0.025], + zeta=z, + ssz=ssz, + isource=isource, + nsolver=2, + solver2params=solver2params, +) +oc = flopy.modflow.ModflowOc(ml, stress_period_data=ocspd) +pcg = flopy.modflow.ModflowPcg( + ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50 +) + +# Write input files and run the SWI2 model. + +ml.write_input() +success, buff = ml.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load model results. 
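+
+# Zeta surfaces were saved once per simulated year (every 365 daily time
+# steps; see `ocspd` above), so the zero-based time steps of interest can be
+# derived rather than hard-coded. A small sketch (an aside, not part of the
+# original example):
+
+yearly_steps = [istp - 1 for istp in range(365, 1461, 365)]
+print(yearly_steps)  # [364, 729, 1094, 1459]
+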
+get_stp = [364, 729, 1094, 1459, 364, 729, 1094, 1459] +get_per = [0, 0, 0, 0, 1, 1, 1, 1] +nswi_times = len(get_per) +zetafile = os.path.join(dirs[0], f"{modelname}.zta") +zobj = flopy.utils.CellBudgetFile(zetafile) +zeta = [] +for kk in zip(get_stp, get_per): + zeta.append(zobj.get_data(kstpkper=kk, text="ZETASRF 1")[0]) +zeta = np.array(zeta) + +# Redefine input data for SEAWAT. + +# + +nlay_swt = 120 +# mt3d print times +timprs = (np.arange(8) + 1) * 365.0 +nprs = len(timprs) +ndecay = 4 +ibound = np.ones((nlay_swt, nrow, ncol), "int") +for k in range(0, nlay_swt): + ibound[k, 0, ncol - 1] = -1 +bot = np.zeros((nlay_swt, nrow, ncol), float) +zall = [0, -20.0, -40.0, -60.0, -80.0, -100.0, -120.0] +dz = 120.0 / nlay_swt +tb = np.arange(nlay_swt) * -dz - dz +sconc = np.zeros((nlay_swt, nrow, ncol), float) +icbund = np.ones((nlay_swt, nrow, ncol), int) +strt = np.zeros((nlay_swt, nrow, ncol), float) +pressure = 0.0 +g = 9.81 +z = -dz / 2.0 # cell center +for k in range(0, nlay_swt): + for i in range(0, ncol): + bot[k, 0, i] = tb[k] + if bot[k, 0, 0] >= -100.0: + sconc[k, 0, :] = 0.0 / 3.0 * 0.025 * 1000.0 / 0.7143 + else: + sconc[k, 0, :] = 3.0 / 3.0 * 0.025 * 1000.0 / 0.7143 + icbund[k, 0, -1] = -1 + + dense = 1000.0 + 0.7143 * sconc[k, 0, 0] + pressure += 0.5 * dz * dense * g + if k > 0: + z = z - dz + denseup = 1000.0 + 0.7143 * sconc[k - 1, 0, 0] + pressure += 0.5 * dz * denseup * g + strt[k, 0, :] = z + pressure / dense / g + # print z, pressure, strt[k, 0, 0], sconc[k, 0, 0] + +khb = (0.0000000000256 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 +kvb = (0.0000000000100 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 +ssb = 1e-5 +sszb = 0.2 +kh = np.zeros((nlay_swt, nrow, ncol), float) +kv = np.zeros((nlay_swt, nrow, ncol), float) +ss = np.zeros((nlay_swt, nrow, ncol), float) +ssz = np.zeros((nlay_swt, nrow, ncol), float) +for k in range(0, nlay_swt): + for i in range(0, ncol): + f = r[i] * 2.0 * math.pi + kh[k, 0, i] = khb * f + kv[k, 0, i] = kvb * f + ss[k, 0, i] = ssb * f + ssz[k, 0, i] = sszb * f +# wells and ssm data +itype = flopy.mt3d.Mt3dSsm.itype_dict() +nwell = 1 +for k in range(0, nlay_swt): + if bot[k, 0, 0] >= -20.0: + nwell = k + 1 +print(f"nlay_swt={nlay_swt} dz={dz} nwell={nwell}") +well_data = {} +ssm_data = {} +wellQ = -2400.0 +wellbtm = -20.0 +wellQpm = wellQ / abs(wellbtm) +for ip in range(0, nper): + welllist = np.zeros((nwell, 4), float) + ssmlist = [] + for iw in range(0, nwell): + if ip == 0: + q = wellQpm * dz + else: + q = 0.0 + welllist[iw, 0] = iw + welllist[iw, 1] = 0 + welllist[iw, 2] = 0 + welllist[iw, 3] = q + ssmlist.append([iw, 0, 0, 0.0, itype["WEL"]]) + well_data[ip] = welllist.copy() + ssm_data[ip] = ssmlist +# - + +# Define model name and exe for SEAWAT model. + +modelname = "swi2ex5_swt" +swtexe_name = "swtv4" + +# Create the SEAWAT model. 
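+
+# In the head initialization above, the top cell reduces to
+# h = z + P / (rho * g) = -dz/2 + dz/2 = 0 m, matching the 0 m boundary head.
+# A quick numeric check (an aside, not part of the original example):
+
+rho0 = 1000.0 + 0.7143 * sconc[0, 0, 0]
+h0 = -dz / 2.0 + (0.5 * dz * rho0 * g) / (rho0 * g)
+print(f"top-cell starting head: {h0:.3f} m (strt = {strt[0, 0, 0]:.3f} m)")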
+ +m = flopy.seawat.Seawat(modelname, exe_name=swtexe_name, model_ws=dirs[1]) +discret = flopy.modflow.ModflowDis( + m, + nrow=nrow, + ncol=ncol, + nlay=nlay_swt, + delr=delr, + delc=delc, + top=0, + botm=bot, + laycbd=0, + nper=nper, + perlen=perlen, + nstp=nstp, + steady=True, +) +bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt) +lpf = flopy.modflow.ModflowLpf( + m, hk=kh, vka=kv, ss=ss, sy=ssz, vkcb=0, laytyp=0, layavg=1 +) +wel = flopy.modflow.ModflowWel(m, stress_period_data=well_data) +oc = flopy.modflow.ModflowOc(m, save_every=365, save_types=["save head"]) +pcg = flopy.modflow.ModflowPcg( + m, hclose=1.0e-5, rclose=3.0e-3, mxiter=100, iter1=50 +) +# Create the basic MT3DMS model data +adv = flopy.mt3d.Mt3dAdv( + m, + mixelm=-1, + percel=0.5, + nadvfd=0, + # 0 or 1 is upstream; 2 is central in space + # particle based methods + nplane=4, + mxpart=1e7, + itrack=2, + dceps=1e-4, + npl=16, + nph=16, + npmin=8, + npmax=256, +) +btn = flopy.mt3d.Mt3dBtn( + m, + icbund=icbund, + prsity=ssz, + ncomp=1, + sconc=sconc, + ifmtcn=-1, + chkmas=False, + nprobs=10, + nprmas=10, + dt0=1.0, + ttsmult=1.0, + nprs=nprs, + timprs=timprs, + mxstrn=1e8, +) +dsp = flopy.mt3d.Mt3dDsp(m, al=0.0, trpt=1.0, trpv=1.0, dmcoef=0.0) +gcg = flopy.mt3d.Mt3dGcg(m, mxiter=1, iter1=50, isolve=1, cclose=1e-7) +ssm = flopy.mt3d.Mt3dSsm(m, stress_period_data=ssm_data) +# Create the SEAWAT model data +vdf = flopy.seawat.SeawatVdf( + m, + iwtable=0, + densemin=0, + densemax=0, + denseref=1000.0, + denseslp=0.7143, + firstdt=1e-3, +) + +# Write SEAWAT model input files. + +m.write_input() + +# Run the SEAWAT model. + +success, buff = m.run_model(silent=True, report=True) +assert success, "Failed to run." + +# Load SEAWAT model results. + +ucnfile = os.path.join(dirs[1], "MT3D001.UCN") +uobj = flopy.utils.UcnFile(ucnfile) +times = uobj.get_times() +print(times) +conc = np.zeros((len(times), nlay_swt, ncol), float) +for idx, tt in enumerate(times): + c = uobj.get_data(totim=tt) + for ilay in range(0, nlay_swt): + for jcol in range(0, ncol): + conc[idx, ilay, jcol] = c[ilay, 0, jcol] + +# Define some spatial data for plots. + +# + +# swi2 +bot = np.zeros((1, ncol, nlay), float) +dz = 100.0 / float(nlay - 1) +zall = -np.arange(0, 100 + dz, dz) +zall = np.append(zall, -120.0) +tb = -np.arange(dz, 100 + dz, dz) +tb = np.append(tb, -120.0) +for k in range(0, nlay): + for i in range(0, ncol): + bot[0, i, k] = tb[k] +# seawat +swt_dz = 120.0 / nlay_swt +swt_tb = np.zeros((nlay_swt), float) +zc = -swt_dz / 2.0 +for klay in range(0, nlay_swt): + swt_tb[klay] = zc + zc -= swt_dz +X, Z = np.meshgrid(x, swt_tb) +# - + +# Plot results. 
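+
+# Before plotting, a quick range check on the loaded concentrations (an
+# optional aside, not part of the original example); values should fall
+# between the fresh (0) and seawater (about 35) end members:
+
+print(f"simulated concentration range: {conc.min():.2f} to {conc.max():.2f}")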
+ +# + +fwid, fhgt = 6.5, 6.5 +flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 + +eps = 1.0e-3 + +lc = ["r", "c", "g", "b", "k"] +cfig = ["A", "B", "C", "D", "E", "F", "G", "H"] +inc = 1.0e-3 + +xsf, axes = plt.subplots(4, 2, figsize=(fwid, fhgt), facecolor="w") +xsf.subplots_adjust( + wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop +) +# withdrawal and recovery titles +ax = axes.flatten()[0] +ax.text( + 0.0, + 1.03, + "Withdrawal", + transform=ax.transAxes, + va="bottom", + ha="left", + size="8", +) +ax = axes.flatten()[1] +ax.text( + 0.0, + 1.03, + "Recovery", + transform=ax.transAxes, + va="bottom", + ha="left", + size="8", +) +# dummy items for legend +ax = axes.flatten()[2] +ax.plot( + [-1, -1], + [-1, -1], + "bo", + markersize=3, + markeredgecolor="blue", + markerfacecolor="None", + label="SWI2 interface", +) +ax.plot( + [-1, -1], + [-1, -1], + color="k", + linewidth=0.75, + linestyle="solid", + label="SEAWAT 50% seawater", +) +ax.plot( + [-1, -1], + [-1, -1], + marker="s", + color="k", + linewidth=0, + linestyle="none", + markeredgecolor="w", + markerfacecolor="0.75", + label="SEAWAT 5-95% seawater", +) +leg = ax.legend( + loc="upper left", + numpoints=1, + ncol=1, + labelspacing=0.5, + borderaxespad=1, + handlelength=3, +) +leg._drawFrame = False +# data items +for itime in range(0, nswi_times): + zb = np.zeros((ncol), float) + zs = np.zeros((ncol), float) + for icol in range(0, ncol): + for klay in range(0, nlay): + # top and bottom of layer + ztop = float(f"{zall[klay]:10.3e}") + zbot = float(f"{zall[klay + 1]:10.3e}") + # fresh-salt zeta surface + zt = zeta[itime, klay, 0, icol] + if (ztop - zt) > eps: + zs[icol] = zt + if itime < ndecay: + ic = itime + isp = ic * 2 + ax = axes.flatten()[isp] + else: + ic = itime - ndecay + isp = (ic * 2) + 1 + ax = axes.flatten()[isp] + # figure title + ax.text( + -0.15, + 1.025, + cfig[itime], + transform=ax.transAxes, + va="center", + ha="center", + size="8", + ) + + # swi2 + ax.plot( + x, + zs, + "bo", + markersize=3, + markeredgecolor="blue", + markerfacecolor="None", + label="_None", + ) + + # seawat + sc = ax.contour( + X, + Z, + conc[itime, :, :], + levels=[17.5], + colors="k", + linestyles="solid", + linewidths=0.75, + zorder=30, + ) + cc = ax.contourf( + X, + Z, + conc[itime, :, :], + levels=[0.0, 1.75, 33.250], + colors=["w", "0.75", "w"], + ) + # set graph limits + ax.set_xlim(0, 500) + ax.set_ylim(-100, -65) + if itime < ndecay: + ax.set_ylabel("Elevation, in meters") + +# x labels +ax = axes.flatten()[6] +ax.set_xlabel("Horizontal distance, in meters") +ax = axes.flatten()[7] +ax.set_xlabel("Horizontal distance, in meters") + +# simulation time titles +for itime in range(0, nswi_times): + if itime < ndecay: + ic = itime + isp = ic * 2 + ax = axes.flatten()[isp] + else: + ic = itime - ndecay + isp = (ic * 2) + 1 + ax = axes.flatten()[isp] + iyr = itime + 1 + if iyr > 1: + ctxt = f"{iyr} years" + else: + ctxt = f"{iyr} year" + ax.text( + 0.95, + 0.925, + ctxt, + transform=ax.transAxes, + va="top", + ha="right", + size="8", + ) + +plt.show() +# - + +# Clean up the temporary workspace. 
+ +try: + temp_dir.cleanup() +except (PermissionError, NotADirectoryError): + pass diff --git a/.docs/create_rstfiles.py b/.docs/create_rstfiles.py index d798fa801e..e6e7c5a1c8 100644 --- a/.docs/create_rstfiles.py +++ b/.docs/create_rstfiles.py @@ -88,7 +88,7 @@ def create_examples_rst(): print(f"Creating {rst_path}") with open(rst_path, "w") as rst_file: - rst_file.write("Examples Gallery\n================\n\n") + rst_file.write("Examples gallery\n================\n\n") rst_file.write( "The following examples illustrate the functionality of Flopy. After the `tutorials `_, the examples are the best resource for learning the underlying capabilities of FloPy.\n\n" ) diff --git a/.docs/examples.rst b/.docs/examples.rst index 5f6ceed6e1..14284e0d76 100644 --- a/.docs/examples.rst +++ b/.docs/examples.rst @@ -1,4 +1,4 @@ -Examples Gallery +Examples gallery ================ The following examples illustrate the functionality of Flopy. After the `tutorials `_, the examples are the best resource for learning the underlying capabilities of FloPy. @@ -85,7 +85,10 @@ MODFLOW-2005/MODFLOW-NWT examples Notebooks/modflow_postprocessing_example Notebooks/sfrpackage_example Notebooks/swi2package_example1 + Notebooks/swi2package_example2 + Notebooks/swi2package_example3 Notebooks/swi2package_example4 + Notebooks/swi2package_example5 Notebooks/uzf_example diff --git a/.docs/main.rst b/.docs/main.rst index a2568d3fce..8b4fb30a27 100644 --- a/.docs/main.rst +++ b/.docs/main.rst @@ -15,12 +15,13 @@ Contents: :maxdepth: 2 introduction - md/version_changes - md/optional_dependencies tutorials examples utilities faq + md/model_checks + md/optional_dependencies + md/version_changes code Indices and tables diff --git a/.docs/utilities.rst b/.docs/utilities.rst index 0bb0730e1d..503d3fd174 100644 --- a/.docs/utilities.rst +++ b/.docs/utilities.rst @@ -1,5 +1,5 @@ -Utilities -========= +Command line utilities +====================== The FloPy package installs several command-line utilities that can be used to help with installing MODFLOW 6 and other binaries, and regenerate FloPy class @@ -10,4 +10,3 @@ definitions for compatibility with MODFLOW 6. 
md/generate_classes md/get_modflow - md/model_checks diff --git a/autotest/test_example_scripts.py b/autotest/test_example_scripts.py deleted file mode 100644 index 8fd95d9e5a..0000000000 --- a/autotest/test_example_scripts.py +++ /dev/null @@ -1,44 +0,0 @@ -import re -from functools import reduce -from pprint import pprint - -import pytest -from autotest.conftest import get_project_root_path -from modflow_devtools.misc import run_py_script - - -def get_example_scripts(exclude=None): - prjroot = get_project_root_path() - - # sort for pytest-xdist: workers must collect tests in the same order - return sorted( - reduce( - lambda a, b: a + b, - [ - [ - str(p) - for p in d.rglob("*.py") - if (p.name not in exclude if exclude else True) - ] - for d in [ - prjroot / "examples" / "scripts", - ] - ], - [], - ) - ) - - -@pytest.mark.slow -@pytest.mark.example -@pytest.mark.parametrize("script", get_example_scripts()) -def test_scripts(script): - stdout, stderr, returncode = run_py_script(script, verbose=True) - if returncode != 0: - if "Missing optional dependency" in stderr: - pkg = re.findall("Missing optional dependency '(.*)'", stderr)[0] - pytest.skip(f"script requires optional dependency {pkg!r}") - - assert returncode == 0 - pprint(stdout) - pprint(stderr) diff --git a/examples/scripts/flopy_henry.py b/examples/scripts/flopy_henry.py deleted file mode 100644 index a746ab1204..0000000000 --- a/examples/scripts/flopy_henry.py +++ /dev/null @@ -1,221 +0,0 @@ -import os -import sys -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - - -def run(workspace, quiet): - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--pdf": - fext = "pdf" - - # Input variables for the Henry Problem - Lx = 2.0 - Lz = 1.0 - nlay = 50 - nrow = 1 - ncol = 100 - delr = Lx / ncol - delc = 1.0 - delv = Lz / nlay - henry_top = 1.0 - henry_botm = np.linspace(henry_top - delv, 0.0, nlay) - qinflow = 5.702 # m3/day - dmcoef = 0.57024 # m2/day Could also try 1.62925 as another case of the Henry problem - hk = 864.0 # m/day - - # Create the basic MODFLOW model data - modelname = "flopy_henry" - m = flopy.seawat.Seawat(modelname, exe_name="swtv4", model_ws=workspace) - - # Add DIS package to the MODFLOW model - dis = flopy.modflow.ModflowDis( - m, - nlay, - nrow, - ncol, - nper=1, - delr=delr, - delc=delc, - laycbd=0, - top=henry_top, - botm=henry_botm, - perlen=1.5, - nstp=15, - ) - - # Variables for the BAS package - ibound = np.ones((nlay, nrow, ncol), dtype=np.int32) - ibound[:, :, -1] = -1 - bas = flopy.modflow.ModflowBas(m, ibound, 0) - - # Add LPF package to the MODFLOW model - lpf = flopy.modflow.ModflowLpf(m, hk=hk, vka=hk, ipakcb=53) - - # Add PCG Package to the MODFLOW model - pcg = flopy.modflow.ModflowPcg(m, hclose=1.0e-8) - - # Add OC package to the MODFLOW model - oc = flopy.modflow.ModflowOc( - m, - stress_period_data={(0, 0): ["save head", "save budget"]}, - compact=True, - ) - - # Create WEL and SSM data - itype = flopy.mt3d.Mt3dSsm.itype_dict() - wel_data = {} - ssm_data = {} - wel_sp1 = [] - ssm_sp1 = [] - for k in range(nlay): - wel_sp1.append([k, 0, 0, qinflow / nlay]) - ssm_sp1.append([k, 0, 0, 0.0, itype["WEL"]]) - ssm_sp1.append([k, 0, ncol - 1, 35.0, itype["BAS6"]]) - wel_data[0] = wel_sp1 - ssm_data[0] = ssm_sp1 - wel = flopy.modflow.ModflowWel(m, stress_period_data=wel_data) - - # Create the basic MT3DMS model data - btn = 
flopy.mt3d.Mt3dBtn( - m, - nprs=-5, - prsity=0.35, - sconc=35.0, - ifmtcn=0, - chkmas=False, - nprobs=10, - nprmas=10, - dt0=0.001, - ) - adv = flopy.mt3d.Mt3dAdv(m, mixelm=0) - dsp = flopy.mt3d.Mt3dDsp(m, al=0.0, trpt=1.0, trpv=1.0, dmcoef=dmcoef) - gcg = flopy.mt3d.Mt3dGcg(m, iter1=500, mxiter=1, isolve=1, cclose=1e-7) - ssm = flopy.mt3d.Mt3dSsm(m, stress_period_data=ssm_data) - - # Create the SEAWAT model data - vdf = flopy.seawat.SeawatVdf( - m, - iwtable=0, - densemin=0, - densemax=0, - denseref=1000.0, - denseslp=0.7143, - firstdt=1e-3, - ) - - # Write the input files - m.write_input() - - # Try to delete the output files, to prevent accidental use of older files - try: - os.remove(os.path.join(workspace, "MT3D001.UCN")) - os.remove(os.path.join(workspace, f"{modelname}.hds")) - os.remove(os.path.join(workspace, f"{modelname}.cbc")) - except: - pass - - # run the model - m.run_model(silent=quiet) - - # Post-process the results - - # Load data - ucnobj = flopy.utils.binaryfile.UcnFile( - os.path.join(workspace, "MT3D001.UCN"), model=m - ) - times = ucnobj.get_times() - concentration = ucnobj.get_data(totim=times[-1]) - cbbobj = flopy.utils.binaryfile.CellBudgetFile( - os.path.join(workspace, f"{modelname}.cbc") - ) - times = cbbobj.get_times() - qx = cbbobj.get_data(text="flow right face", totim=times[-1])[0] - qz = cbbobj.get_data(text="flow lower face", totim=times[-1])[0] - - # Average flows to cell centers - qx_avg = np.empty(qx.shape, dtype=qx.dtype) - qx_avg[:, :, 1:] = 0.5 * (qx[:, :, 0 : ncol - 1] + qx[:, :, 1:ncol]) - qx_avg[:, :, 0] = 0.5 * qx[:, :, 0] - qz_avg = np.empty(qz.shape, dtype=qz.dtype) - qz_avg[1:, :, :] = 0.5 * (qz[0 : nlay - 1, :, :] + qz[1:nlay, :, :]) - qz_avg[0, :, :] = 0.5 * qz[0, :, :] - - # Make the plot - # import matplotlib.pyplot as plt - fig = plt.figure(figsize=(10, 10)) - ax = fig.add_subplot(1, 1, 1, aspect="equal") - ax.imshow( - concentration[:, 0, :], interpolation="nearest", extent=(0, Lx, 0, Lz) - ) - y, x, z = dis.get_node_coordinates() - X, Z = np.meshgrid(x, z[:, 0, 0]) - iskip = 3 - ax.quiver( - X[::iskip, ::iskip], - Z[::iskip, ::iskip], - qx_avg[::iskip, 0, ::iskip], - -qz_avg[::iskip, 0, ::iskip], - color="w", - scale=5, - headwidth=3, - headlength=2, - headaxislength=2, - width=0.0025, - ) - - outfig = os.path.join(workspace, f"henry_flows.{fext}") - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - # Extract the heads - fname = os.path.join(workspace, f"{modelname}.hds") - headobj = flopy.utils.binaryfile.HeadFile(fname) - times = headobj.get_times() - head = headobj.get_data(totim=times[-1]) - - # Make a simple head plot - fig = plt.figure(figsize=(10, 10)) - ax = fig.add_subplot(1, 1, 1, aspect="equal") - im = ax.imshow( - head[:, 0, :], interpolation="nearest", extent=(0, Lx, 0, Lz) - ) - ax.set_title("Simulated Heads") - - outfig = os.path.join(workspace, f"henry_heads.{fext}") - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: 
https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_lake_example.py b/examples/scripts/flopy_lake_example.py deleted file mode 100644 index 6134a0d9e5..0000000000 --- a/examples/scripts/flopy_lake_example.py +++ /dev/null @@ -1,163 +0,0 @@ -import os -import sys -from pathlib import Path -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - - -def run(workspace: Path, quiet=False): - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--pdf": - fext = "pdf" - - # We are creating a square model with a specified head equal to `h1` along all boundaries. - # The head at the cell in the center in the top layer is fixed to `h2`. First, set the name - # of the model and the parameters of the model: the number of layers `Nlay`, the number of rows - # and columns `N`, lengths of the sides of the model `L`, aquifer thickness `H`, hydraulic - # conductivity `Kh` - name = "lake_example" - h1 = 100 - h2 = 90 - Nlay = 10 - N = 101 - L = 400.0 - H = 50.0 - Kh = 1.0 - - # Create a MODFLOW model and store it (in this case in the variable `ml`, but you can call it - # whatever you want). The modelname will be the name given to all MODFLOW files (input and output). - # The exe_name should be the full path to your MODFLOW executable. The version is either 'mf2k' - # for MODFLOW2000 or 'mf2005'for MODFLOW2005. - ml = flopy.modflow.Modflow( - modelname=name, exe_name="mf2005", version="mf2005", model_ws=workspace - ) - - # Define the discretization of the model. All layers are given equal thickness. The `bot` array - # is build from the `Hlay` values to indicate top and bottom of each layer, and `delrow` and - # `delcol` are computed from model size `L` and number of cells `N`. Once these are all computed, - # the Discretization file is built. - bot = np.linspace(-H / Nlay, -H, Nlay) - delrow = delcol = L / (N - 1) - dis = flopy.modflow.ModflowDis( - ml, - nlay=Nlay, - nrow=N, - ncol=N, - delr=delrow, - delc=delcol, - top=0.0, - botm=bot, - laycbd=0, - ) - - # Next we specify the boundary conditions and starting heads with the Basic package. The `ibound` - # array will be `1` in all cells in all layers, except for along the boundary and in the cell at - # the center in the top layer where it is set to `-1` to indicate fixed heads. The starting heads - # are used to define the heads in the fixed head cells (this is a steady simulation, so none of - # the other starting values matter). So we set the starting heads to `h1` everywhere, except for - # the head at the center of the model in the top layer. - Nhalf = int((N - 1) / 2) - ibound = np.ones((Nlay, N, N), dtype=int) - ibound[:, 0, :] = -1 - ibound[:, -1, :] = -1 - ibound[:, :, 0] = -1 - ibound[:, :, -1] = -1 - ibound[0, Nhalf, Nhalf] = -1 - - start = h1 * np.ones((N, N)) - start[Nhalf, Nhalf] = h2 - - # create external ibound array and starting head files - files = [] - hfile = workspace / f"{name}_strt.ref" - np.savetxt(hfile, start) - hfiles = [] - for kdx in range(Nlay): - file = str(Path(workspace) / f"{name}_ib{kdx + 1:02d}.ref") - files.append(file) - hfiles.append(hfile) - np.savetxt(file, ibound[kdx, :, :], fmt="%5d") - - bas = flopy.modflow.ModflowBas(ml, ibound=files, strt=hfiles) - - # The aquifer properties (really only the hydraulic conductivity) are defined with the - # LPF package. 
- lpf = flopy.modflow.ModflowLpf(ml, hk=Kh) - - # Finally, we need to specify the solver we want to use (PCG with default values), and the - # output control (using the default values). Then we are ready to write all MODFLOW input - # files and run MODFLOW. - pcg = flopy.modflow.ModflowPcg(ml) - oc = flopy.modflow.ModflowOc(ml) - ml.write_input() - ml.run_model(silent=quiet) - - # Once the model has terminated normally, we can read the heads file. First, a link to the heads - # file is created with `HeadFile`. The link can then be accessed with the `get_data` function, by - # specifying, in this case, the step number and period number for which we want to retrieve data. - # A three-dimensional array is returned of size `nlay, nrow, ncol`. Matplotlib contouring functions - # are used to make contours of the layers or a cross-section. - hds = flopy.utils.HeadFile(workspace / f"{name}.hds") - h = hds.get_data(kstpkper=(0, 0)) - x = y = np.linspace(0, L, N) - c = plt.contour(x, y, h[0], np.arange(90, 100.1, 0.2)) - plt.clabel(c, fmt="%2.1f") - plt.axis("scaled") - - outfig = workspace / f"lake1.{fext}" - fig = plt.gcf() - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - x = y = np.linspace(0, L, N) - c = plt.contour(x, y, h[-1], np.arange(90, 100.1, 0.2)) - plt.clabel(c, fmt="%1.1f") - plt.axis("scaled") - - outfig = workspace / f"lake2.{fext}" - fig = plt.gcf() - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - z = np.linspace(-H / Nlay / 2, -H + H / Nlay / 2, Nlay) - c = plt.contour(x, z, h[:, 50, :], np.arange(90, 100.1, 0.2)) - plt.axis("scaled") - - outfig = workspace / f"lake3.{fext}" - fig = plt.gcf() - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(Path(workspace), quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(Path(workspace), quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_swi2_ex1.py b/examples/scripts/flopy_swi2_ex1.py deleted file mode 100644 index dea979d780..0000000000 --- a/examples/scripts/flopy_swi2_ex1.py +++ /dev/null @@ -1,288 +0,0 @@ -import math -import os -import sys -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - -# --modify default matplotlib settings -updates = { - "font.family": ["Univers 57 Condensed", "Arial"], - "mathtext.default": "regular", - "pdf.compression": 0, - "pdf.fonttype": 42, - "legend.fontsize": 7, - "axes.labelsize": 8, - "xtick.labelsize": 7, - "ytick.labelsize": 7, -} -plt.rcParams.update(updates) - - -def run(workspace, quiet): - cleanFiles = False - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--clean": - cleanFiles = True - elif basearg == "--pdf": - fext = "pdf" - - if cleanFiles: - print("cleaning all files") - print("excluding *.py files") - files = os.listdir(workspace) - for f in files: - if os.path.isdir(f): - continue - if ".py" != os.path.splitext(f)[1].lower(): - print(f" 
removing...{os.path.basename(f)}") - os.remove(os.path.join(workspace, f)) - return 1 - - modelname = "swiex1" - exe_name = "mf2005" - - nlay = 1 - nrow = 1 - ncol = 50 - - delr = 5.0 - delc = 1.0 - - ibound = np.ones((nrow, ncol), int) - ibound[0, -1] = -1 - - # create initial zeta surface - z = np.zeros((nrow, ncol), float) - z[0, 16:24] = np.arange(-2.5, -40, -5) - z[0, 24:] = -40 - z = [z] - # create isource for SWI2 - isource = np.ones((nrow, ncol), int) - isource[0, 0] = 2 - - ocdict = {} - for idx in range(49, 200, 50): - key = (0, idx) - ocdict[key] = ["save head", "save budget"] - key = (0, idx + 1) - ocdict[key] = [] - - # create flopy modflow object - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=exe_name, model_ws=workspace - ) - # create flopy modflow package objects - discret = flopy.modflow.ModflowDis( - ml, - nlay=nlay, - nrow=nrow, - ncol=ncol, - delr=delr, - delc=delc, - top=0, - botm=[-40.0], - perlen=400, - nstp=200, - ) - bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.0) - lpf = flopy.modflow.ModflowLpf( - ml, hk=2.0, vka=2.0, vkcb=0, laytyp=0, layavg=0 - ) - wel = flopy.modflow.ModflowWel(ml, stress_period_data={0: [(0, 0, 0, 1)]}) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - npln=1, - istrat=1, - toeslope=0.2, - tipslope=0.2, - nu=[0, 0.025], - zeta=z, - ssz=0.2, - isource=isource, - nsolver=1, - ) - oc = flopy.modflow.ModflowOc(ml, stress_period_data=ocdict) - pcg = flopy.modflow.ModflowPcg(ml) - # create model files - ml.write_input() - # run the model - m = ml.run_model(silent=quiet) - - # read model heads - headfile = os.path.join(workspace, f"{modelname}.hds") - hdobj = flopy.utils.HeadFile(headfile) - head = hdobj.get_alldata() - head = np.array(head) - # read model zeta - zetafile = os.path.join(workspace, f"{modelname}.zta") - zobj = flopy.utils.CellBudgetFile(zetafile) - zkstpkper = zobj.get_kstpkper() - zeta = [] - for kk in zkstpkper: - zeta.append(zobj.get_data(kstpkper=kk, text=" ZETASRF 1")[0]) - zeta = np.array(zeta) - - x = np.arange(0.5 * delr, ncol * delr, delr) - - # Wilson and Sa Da Costa - k = 2.0 - n = 0.2 - nu = 0.025 - H = 40.0 - tzero = H * n / (k * nu) / 4.0 - Ltoe = np.zeros(4) - v = 0.125 - t = np.arange(100, 500, 100) - - fwid = 7.00 - fhgt = 3.50 - flft = 0.125 - frgt = 0.95 - fbot = 0.125 - ftop = 0.925 - - fig = plt.figure(figsize=(fwid, fhgt), facecolor="w") - fig.subplots_adjust( - wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop - ) - - ax = fig.add_subplot(211) - ax.text( - -0.075, - 1.05, - "A", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - ax.plot([80, 120], [0, -40], "k") - ax.set_xlim(0, 250) - ax.set_ylim(-40, 0) - ax.set_yticks(np.arange(-40, 1, 10)) - ax.text(50, -10, "salt") - ax.text(130, -10, "fresh") - a = ax.annotate( - "", - xy=(50, -25), - xytext=(30, -25), - arrowprops=dict(arrowstyle="->", fc="k"), - ) - ax.text( - 40, -22, "groundwater flow velocity=0.125 m/d", ha="center", size=7 - ) - ax.set_ylabel("Elevation, in meters") - - ax = fig.add_subplot(212) - ax.text( - -0.075, - 1.05, - "B", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - - for i in range(4): - Ltoe[i] = H * math.sqrt(k * nu * (t[i] + tzero) / n / H) - ax.plot( - [100 - Ltoe[i] + v * t[i], 100 + Ltoe[i] + v * t[i]], - [0, -40], - "k", - label="_None", - ) - - for i in range(4): - zi = zeta[i, 0, 0, :] - p = (zi < 0) & (zi > -39.9) - ax.plot( - x[p], - zeta[i, 0, 0, p], - "bo", - markersize=3, - markeredgecolor="blue", - 
markerfacecolor="None", - label="_None", - ) - ipos = 0 - for jdx, t in enumerate(zeta[i, 0, 0, :]): - if t > -39.9: - ipos = jdx - ax.text( - x[ipos], - -37.75, - f"{(i + 1) * 100} days", - size=5, - ha="left", - va="center", - ) - - # fake items for labels - ax.plot([-100.0, -100], [-100.0, -100], "k", label="Analytical solution") - ax.plot( - [-100.0, -100], - [-100.0, -100], - "bo", - markersize=3, - markeredgecolor="blue", - markerfacecolor="None", - label="SWI2", - ) - # legend - leg = ax.legend(loc="upper right", numpoints=1) - leg._drawFrame = False - # axes - ax.set_xlim(0, 250) - ax.set_ylim(-40, 0) - ax.set_yticks(np.arange(-40, 1, 10)) - a = ax.annotate( - "", - xy=(50, -25), - xytext=(30, -25), - arrowprops=dict(arrowstyle="->", fc="k"), - ) - ax.text( - 40, -22, "groundwater flow velocity=0.125 m/d", ha="center", size=7 - ) - ax.set_ylabel("Elevation, in meters") - ax.set_xlabel("Horizontal distance, in meters") - - outfig = os.path.join(workspace, f"Figure06_swi2ex1.{fext}") - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_swi2_ex2.py b/examples/scripts/flopy_swi2_ex2.py deleted file mode 100644 index 64993d2bfa..0000000000 --- a/examples/scripts/flopy_swi2_ex2.py +++ /dev/null @@ -1,497 +0,0 @@ -import os -import sys -from pathlib import Path -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - -# --modify default matplotlib settings -updates = { - "font.family": ["Univers 57 Condensed", "Arial"], - "mathtext.default": "regular", - "pdf.compression": 0, - "pdf.fonttype": 42, - "legend.fontsize": 7, - "axes.labelsize": 8, - "xtick.labelsize": 7, - "ytick.labelsize": 7, -} -plt.rcParams.update(updates) - - -def run(workspace, quiet): - cleanFiles = False - skipRuns = False - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--clean": - cleanFiles = True - elif basearg == "--skipruns": - skipRuns = True - elif basearg == "--pdf": - fext = "pdf" - - dirs = [os.path.join(workspace, "SWI2"), os.path.join(workspace, "SEAWAT")] - - if cleanFiles: - print("cleaning all files") - print("excluding *.py files") - file_dict = {} - file_dict[0] = os.listdir(dirs[0]) - file_dict[1] = os.listdir(dirs[1]) - file_dict[-1] = os.listdir(workspace) - for key, files in list(file_dict.items()): - pth = "." 
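-            # key -1 holds the files sitting in the workspace root, while keys
-            # 0 and 1 map to the SWI2 and SEAWAT subdirectories listed above.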
- if key >= 0: - pth = dirs[key] - for f in files: - fpth = os.path.join(pth, f) - if os.path.isdir(fpth): - continue - if ".py" != os.path.splitext(f)[1].lower(): - print(f" removing...{os.path.basename(f)}") - try: - os.remove(fpth) - except: - pass - for d in dirs: - if os.path.exists(d): - os.rmdir(d) - return 1 - - # make working directories - for d in dirs: - if not os.path.exists(d): - os.mkdir(d) - - modelname = "swiex2" - mf_name = "mf2005" - - # problem data - nper = 1 - perlen = 2000 - nstp = 1000 - nlay, nrow, ncol = 1, 1, 60 - delr = 5.0 - nsurf = 2 - x = np.arange(0.5 * delr, ncol * delr, delr) - xedge = np.linspace(0, float(ncol) * delr, len(x) + 1) - ibound = np.ones((nrow, ncol), int) - ibound[0, 0] = -1 - # swi2 data - z0 = np.zeros((nlay, nrow, ncol), float) - z1 = np.zeros((nlay, nrow, ncol), float) - z0[0, 0, 30:38] = np.arange(-2.5, -40, -5) - z0[0, 0, 38:] = -40 - z1[0, 0, 22:30] = np.arange(-2.5, -40, -5) - z1[0, 0, 30:] = -40 - z = [] - z.append(z0) - z.append(z1) - ssz = 0.2 - isource = np.ones((nrow, ncol), "int") - isource[0, 0] = 2 - - # stratified model - modelname = "swiex2_strat" - print("creating...", modelname) - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=mf_name, model_ws=dirs[0] - ) - discret = flopy.modflow.ModflowDis( - ml, - nlay=1, - ncol=ncol, - nrow=nrow, - delr=delr, - delc=1, - top=0, - botm=[-40.0], - nper=nper, - perlen=perlen, - nstp=nstp, - ) - bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.05) - bcf = flopy.modflow.ModflowBcf(ml, laycon=0, tran=2 * 40) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - nsrf=nsurf, - istrat=1, - toeslope=0.2, - tipslope=0.2, - nu=[0, 0.0125, 0.025], - zeta=z, - ssz=ssz, - isource=isource, - nsolver=1, - ) - oc = flopy.modflow.ModflowOc( - ml, stress_period_data={(0, 999): ["save head"]} - ) - pcg = flopy.modflow.ModflowPcg(ml) - ml.write_input() - # run stratified model - if not skipRuns: - m = ml.run_model(silent=quiet) - - # read stratified results - zetafile = os.path.join(dirs[0], f"{modelname}.zta") - zobj = flopy.utils.CellBudgetFile(zetafile) - zkstpkper = zobj.get_kstpkper() - zeta = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 1")[0] - zeta2 = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 2")[0] - # - # vd model - modelname = "swiex2_vd" - print("creating...", modelname) - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=mf_name, model_ws=dirs[0] - ) - discret = flopy.modflow.ModflowDis( - ml, - nlay=1, - ncol=ncol, - nrow=nrow, - delr=delr, - delc=1, - top=0, - botm=[-40.0], - nper=nper, - perlen=perlen, - nstp=nstp, - ) - bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=0.05) - bcf = flopy.modflow.ModflowBcf(ml, laycon=0, tran=2 * 40) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - nsrf=nsurf, - istrat=0, - toeslope=0.2, - tipslope=0.2, - nu=[0, 0, 0.025, 0.025], - zeta=z, - ssz=ssz, - isource=isource, - nsolver=1, - ) - oc = flopy.modflow.ModflowOc( - ml, stress_period_data={(0, 999): ["save head"]} - ) - pcg = flopy.modflow.ModflowPcg(ml) - ml.write_input() - # run vd model - if not skipRuns: - m = ml.run_model(silent=quiet) - - # read vd model data - zetafile = os.path.join(dirs[0], f"{modelname}.zta") - zobj = flopy.utils.CellBudgetFile(zetafile) - zkstpkper = zobj.get_kstpkper() - zetavd = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 1")[0] - zetavd2 = zobj.get_data(kstpkper=zkstpkper[-1], text="ZETASRF 2")[0] - # - # seawat model - swtexe_name = "swtv4" - modelname = "swiex2_swt" - 
print("creating...", modelname) - swt_xmax = 300.0 - swt_zmax = 40.0 - swt_delr = 1.0 - swt_delc = 1.0 - swt_delz = 0.5 - swt_ncol = int(swt_xmax / swt_delr) # 300 - swt_nrow = 1 - swt_nlay = int(swt_zmax / swt_delz) # 80 - print(swt_nlay, swt_nrow, swt_ncol) - swt_ibound = np.ones((swt_nlay, swt_nrow, swt_ncol), int) - # swt_ibound[0, swt_ncol-1, 0] = -1 - swt_ibound[0, 0, 0] = -1 - swt_x = np.arange(0.5 * swt_delr, swt_ncol * swt_delr, swt_delr) - swt_xedge = np.linspace(0, float(ncol) * delr, len(swt_x) + 1) - swt_top = 0.0 - z0 = swt_top - swt_botm = np.zeros((swt_nlay), float) - swt_z = np.zeros((swt_nlay), float) - zcell = -swt_delz / 2.0 - for ilay in range(0, swt_nlay): - z0 -= swt_delz - swt_botm[ilay] = z0 - swt_z[ilay] = zcell - zcell -= swt_delz - # swt_X, swt_Z = np.meshgrid(swt_x, swt_botm) - swt_X, swt_Z = np.meshgrid(swt_x, swt_z) - # mt3d - # mt3d boundary array set to all active - icbund = np.ones((swt_nlay, swt_nrow, swt_ncol), int) - # create initial concentrations for MT3D - sconc = np.ones((swt_nlay, swt_nrow, swt_ncol), float) - sconcp = np.zeros((swt_nlay, swt_ncol), float) - xsb = 110 - xbf = 150 - for ilay in range(0, swt_nlay): - for icol in range(0, swt_ncol): - if swt_x[icol] > xsb: - sconc[ilay, 0, icol] = 0.5 - if swt_x[icol] > xbf: - sconc[ilay, 0, icol] = 0.0 - for icol in range(0, swt_ncol): - sconcp[ilay, icol] = sconc[ilay, 0, icol] - xsb += swt_delz - xbf += swt_delz - - # ssm data - itype = flopy.mt3d.Mt3dSsm.itype_dict() - ssm_data = {0: [0, 0, 0, 35.0, itype["BAS6"]]} - - # print sconcp - # mt3d print times - timprs = (np.arange(5) + 1) * 2000.0 - nprs = len(timprs) - # create the MODFLOW files - m = flopy.seawat.Seawat(modelname, exe_name=swtexe_name, model_ws=dirs[1]) - discret = flopy.modflow.ModflowDis( - m, - nrow=swt_nrow, - ncol=swt_ncol, - nlay=swt_nlay, - delr=swt_delr, - delc=swt_delc, - laycbd=0, - top=swt_top, - botm=swt_botm, - nper=nper, - perlen=perlen, - nstp=1, - steady=False, - ) - bas = flopy.modflow.ModflowBas(m, ibound=swt_ibound, strt=0.05) - lpf = flopy.modflow.ModflowLpf( - m, hk=2.0, vka=2.0, ss=0.0, sy=0.0, laytyp=0, layavg=0 - ) - oc = flopy.modflow.ModflowOc(m, save_every=1, save_types=["save head"]) - pcg = flopy.modflow.ModflowPcg(m) - # Create the MT3DMS model files - adv = flopy.mt3d.Mt3dAdv( - m, - mixelm=-1, # -1 is TVD - percel=0.05, - nadvfd=0, - # 0 or 1 is upstream; 2 is central in space - # particle based methods - nplane=4, - mxpart=1e7, - itrack=2, - dceps=1e-4, - npl=16, - nph=16, - npmin=8, - npmax=256, - ) - btn = flopy.mt3d.Mt3dBtn( - m, - icbund=1, - prsity=ssz, - sconc=sconc, - ifmtcn=-1, - chkmas=False, - nprobs=10, - nprmas=10, - dt0=0.0, - ttsmult=1.2, - ttsmax=100.0, - ncomp=1, - nprs=nprs, - timprs=timprs, - mxstrn=1e8, - ) - dsp = flopy.mt3d.Mt3dDsp(m, al=0.0, trpt=1.0, trpv=1.0, dmcoef=0.0) - gcg = flopy.mt3d.Mt3dGcg( - m, mxiter=1, iter1=50, isolve=3, cclose=1e-6, iprgcg=5 - ) - ssm = flopy.mt3d.Mt3dSsm(m, stress_period_data=ssm_data) - # Create the SEAWAT model files - vdf = flopy.seawat.SeawatVdf( - m, - nswtcpl=1, - iwtable=0, - densemin=0, - densemax=0, - denseref=1000.0, - denseslp=25.0, - firstdt=1.0e-03, - ) - m.write_input() - # run seawat model - if not skipRuns: - m = m.run_model(silent=quiet) - - # read seawat model data - ucnfile = os.path.join(dirs[1], "MT3D001.UCN") - uobj = flopy.utils.UcnFile(ucnfile) - times = uobj.get_times() - print(times) - ukstpkper = uobj.get_kstpkper() - print(ukstpkper) - c = uobj.get_data(totim=times[-1]) - conc = np.zeros((swt_nlay, swt_ncol), float) - 
for icol in range(0, swt_ncol): - for ilay in range(0, swt_nlay): - conc[ilay, icol] = c[ilay, 0, icol] - # - # figure - fwid = 7.0 # 6.50 - fhgt = 4.5 # 6.75 - flft = 0.125 - frgt = 0.95 - fbot = 0.125 - ftop = 0.925 - - print("creating cross-section figure...") - xsf, axes = plt.subplots(3, 1, figsize=(fwid, fhgt), facecolor="w") - xsf.subplots_adjust( - wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop - ) - # plot initial conditions - ax = axes[0] - ax.text( - -0.075, - 1.05, - "A", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - # text(.975, .1, '(a)', transform = ax.transAxes, va = 'center', ha = 'center') - ax.plot([110, 150], [0, -40], "k") - ax.plot([150, 190], [0, -40], "k") - ax.set_xlim(0, 300) - ax.set_ylim(-40, 0) - ax.set_yticks(np.arange(-40, 1, 10)) - ax.text(50, -20, "salt", va="center", ha="center") - ax.text(150, -20, "brackish", va="center", ha="center") - ax.text(250, -20, "fresh", va="center", ha="center") - ax.set_ylabel("Elevation, in meters") - # plot stratified swi2 and seawat results - ax = axes[1] - ax.text( - -0.075, - 1.05, - "B", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - # - zp = zeta[0, 0, :] - p = (zp < 0.0) & (zp > -40.0) - ax.plot(x[p], zp[p], "b", linewidth=1.5, drawstyle="steps-mid") - zp = zeta2[0, 0, :] - p = (zp < 0.0) & (zp > -40.0) - ax.plot(x[p], zp[p], "b", linewidth=1.5, drawstyle="steps-mid") - # seawat data - cc = ax.contour( - swt_X, - swt_Z, - conc, - levels=[0.25, 0.75], - colors="k", - linestyles="solid", - linewidths=0.75, - zorder=101, - ) - # fake figures - ax.plot([-100.0, -100], [-100.0, -100], "b", linewidth=1.5, label="SWI2") - ax.plot( - [-100.0, -100], [-100.0, -100], "k", linewidth=0.75, label="SEAWAT" - ) - # legend - leg = ax.legend(loc="lower left", numpoints=1) - leg._drawFrame = False - # axes - ax.set_xlim(0, 300) - ax.set_ylim(-40, 0) - ax.set_yticks(np.arange(-40, 1, 10)) - ax.set_ylabel("Elevation, in meters") - # plot vd model - ax = axes[2] - ax.text( - -0.075, - 1.05, - "C", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - dr = zeta[0, 0, :] - ax.plot(x, dr, "b", linewidth=1.5, drawstyle="steps-mid") - dr = zeta2[0, 0, :] - ax.plot(x, dr, "b", linewidth=1.5, drawstyle="steps-mid") - dr = zetavd[0, 0, :] - ax.plot(x, dr, "r", linewidth=0.75, drawstyle="steps-mid") - dr = zetavd2[0, 0, :] - ax.plot(x, dr, "r", linewidth=0.75, drawstyle="steps-mid") - # fake figures - ax.plot( - [-100.0, -100], - [-100.0, -100], - "b", - linewidth=1.5, - label="SWI2 stratified option", - ) - ax.plot( - [-100.0, -100], - [-100.0, -100], - "r", - linewidth=0.75, - label="SWI2 continuous option", - ) - # legend - leg = ax.legend(loc="lower left", numpoints=1) - leg._drawFrame = False - # axes - ax.set_xlim(0, 300) - ax.set_ylim(-40, 0) - ax.set_yticks(np.arange(-40, 1, 10)) - ax.set_xlabel("Horizontal distance, in meters") - ax.set_ylabel("Elevation, in meters") - - outfig = os.path.join(workspace, f"Figure07_swi2ex2.{fext}") - xsf.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - 
run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_swi2_ex3.py b/examples/scripts/flopy_swi2_ex3.py deleted file mode 100644 index b7f09cca03..0000000000 --- a/examples/scripts/flopy_swi2_ex3.py +++ /dev/null @@ -1,335 +0,0 @@ -import os -import sys -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - -# --modify default matplotlib settings -updates = { - "font.family": ["Univers 57 Condensed", "Arial"], - "mathtext.default": "regular", - "pdf.compression": 0, - "pdf.fonttype": 42, - "legend.fontsize": 7, - "axes.labelsize": 8, - "xtick.labelsize": 7, - "ytick.labelsize": 7, -} -plt.rcParams.update(updates) - - -def MergeData(ndim, zdata, tb): - sv = 0.05 - md = np.empty((ndim), float) - md.fill(np.nan) - found = np.empty((ndim), bool) - found.fill(False) - for idx, layer in enumerate(zdata): - for jdx, z in enumerate(layer): - if found[jdx] == True: - continue - t0 = tb[idx][0] - sv - t1 = tb[idx][1] + sv - if z < t0 and z > t1: - md[jdx] = z - found[jdx] = True - return md - - -def LegBar(ax, x0, y0, t0, dx, dy, dt, cc): - for c in cc: - ax.plot([x0, x0 + dx], [y0, y0], color=c, linewidth=4) - ctxt = f"{t0:=3d} years" - ax.text(x0 + 2.0 * dx, y0 + dy / 2.0, ctxt, size=5) - y0 += dy - t0 += dt - return - - -def run(workspace, quiet): - cleanFiles = False - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--clean": - cleanFiles = True - elif basearg == "--pdf": - fext = "pdf" - - if cleanFiles: - print("cleaning all files") - print("excluding *.py files") - files = os.listdir(workspace) - for f in files: - fpth = os.path.join(workspace, f) - if os.path.isdir(fpth): - continue - if ".py" != os.path.splitext(f)[1].lower(): - print(f" removing...{os.path.basename(f)}") - try: - os.remove(fpth) - except: - pass - return 0 - - modelname = "swiex3" - exe_name = "mf2005" - - nlay = 3 - nrow = 1 - ncol = 200 - delr = 20.0 - delc = 1.0 - # well data - lrcQ1 = np.array([(0, 0, 199, 0.01), (2, 0, 199, 0.02)]) - lrcQ2 = np.array([(0, 0, 199, 0.01 * 0.5), (2, 0, 199, 0.02 * 0.5)]) - # ghb data - lrchc = np.zeros((30, 5)) - lrchc[:, [0, 1, 3, 4]] = [0, 0, 0.0, 0.8 / 2.0] - lrchc[:, 2] = np.arange(0, 30) - # swi2 data - zini = np.hstack( - (-9 * np.ones(24), np.arange(-9, -50, -0.5), -50 * np.ones(94)) - )[np.newaxis, :] - iso = np.zeros((1, 200), dtype=int) - iso[:, :30] = -2 - # model objects - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=exe_name, model_ws=workspace - ) - discret = flopy.modflow.ModflowDis( - ml, - nrow=nrow, - ncol=ncol, - nlay=3, - delr=delr, - delc=delc, - laycbd=[0, 0, 0], - top=-9.0, - botm=[-29, -30, -50], - nper=2, - perlen=[365 * 1000, 1000 * 365], - nstp=[500, 500], - ) - bas = flopy.modflow.ModflowBas(ml, ibound=1, strt=1.0) - bcf = flopy.modflow.ModflowBcf( - ml, laycon=[0, 0, 0], tran=[40.0, 1, 80.0], vcont=[0.005, 0.005] - ) - wel = flopy.modflow.ModflowWel(ml, stress_period_data={0: lrcQ1, 1: lrcQ2}) - ghb = flopy.modflow.ModflowGhb(ml, stress_period_data={0: lrchc}) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - nsrf=1, - istrat=1, - toeslope=0.01, - tipslope=0.04, - nu=[0, 0.025], - zeta=[zini, zini, zini], - ssz=0.2, - isource=iso, - nsolver=1, - ) - oc = flopy.modflow.ModflowOc(ml, save_every=100, save_types=["save 
head"]) - pcg = flopy.modflow.ModflowPcg(ml) - # write the model files - ml.write_input() - # run the model - m = ml.run_model(silent=quiet) - - headfile = os.path.join(workspace, f"{modelname}.hds") - hdobj = flopy.utils.HeadFile(headfile) - head = hdobj.get_data(totim=3.65000e05) - - zetafile = os.path.join(workspace, f"{modelname}.zta") - zobj = flopy.utils.CellBudgetFile(zetafile) - zkstpkper = zobj.get_kstpkper() - zeta = [] - for kk in zkstpkper: - zeta.append(zobj.get_data(kstpkper=kk, text="ZETASRF 1")[0]) - zeta = np.array(zeta) - - fwid, fhgt = 7.00, 4.50 - flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 - - colormap = plt.cm.plasma # winter - cc = [] - icolor = 11 - cr = np.linspace(0.0, 0.9, icolor) - for idx in cr: - cc.append(colormap(idx)) - lw = 0.5 - - x = np.arange(-30 * delr + 0.5 * delr, (ncol - 30) * delr, delr) - xedge = np.linspace(-30.0 * delr, (ncol - 30.0) * delr, len(x) + 1) - zedge = [[-9.0, -29.0], [-29.0, -30.0], [-30.0, -50.0]] - - fig = plt.figure(figsize=(fwid, fhgt), facecolor="w") - fig.subplots_adjust( - wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop - ) - - ax = fig.add_subplot(311) - ax.text( - -0.075, - 1.05, - "A", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - # confining unit - ax.fill( - [-600, 3400, 3400, -600], - [-29, -29, -30, -30], - fc=[0.8, 0.8, 0.8], - ec=[0.8, 0.8, 0.8], - ) - # - z = np.copy(zini[0, :]) - zr = z.copy() - p = (zr < -9.0) & (zr > -50.0) - ax.plot(x[p], zr[p], color=cc[0], linewidth=lw, drawstyle="steps-mid") - # - for i in range(5): - zt = MergeData( - ncol, [zeta[i, 0, 0, :], zeta[i, 1, 0, :], zeta[i, 2, 0, :]], zedge - ) - dr = zt.copy() - ax.plot(x, dr, color=cc[i + 1], linewidth=lw, drawstyle="steps-mid") - # Manufacture a legend bar - LegBar(ax, -200.0, -33.75, 0, 25, -2.5, 200, cc[0:6]) - # axes - ax.set_ylim(-50, -9) - ax.set_ylabel("Elevation, in meters") - ax.set_xlim(-250.0, 2500.0) - - ax = fig.add_subplot(312) - ax.text( - -0.075, - 1.05, - "B", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - # confining unit - ax.fill( - [-600, 3400, 3400, -600], - [-29, -29, -30, -30], - fc=[0.8, 0.8, 0.8], - ec=[0.8, 0.8, 0.8], - ) - # - for i in range(4, 10): - zt = MergeData( - ncol, [zeta[i, 0, 0, :], zeta[i, 1, 0, :], zeta[i, 2, 0, :]], zedge - ) - dr = zt.copy() - ax.plot(x, dr, color=cc[i + 1], linewidth=lw, drawstyle="steps-mid") - # Manufacture a legend bar - LegBar(ax, -200.0, -33.75, 1000, 25, -2.5, 200, cc[5:11]) - # axes - ax.set_ylim(-50, -9) - ax.set_ylabel("Elevation, in meters") - ax.set_xlim(-250.0, 2500.0) - - ax = fig.add_subplot(313) - ax.text( - -0.075, - 1.05, - "C", - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - # confining unit - ax.fill( - [-600, 3400, 3400, -600], - [-29, -29, -30, -30], - fc=[0.8, 0.8, 0.8], - ec=[0.8, 0.8, 0.8], - ) - # - zt = MergeData( - ncol, [zeta[4, 0, 0, :], zeta[4, 1, 0, :], zeta[4, 2, 0, :]], zedge - ) - ax.plot( - x, - zt, - marker="o", - markersize=3, - linewidth=0.0, - markeredgecolor="blue", - markerfacecolor="None", - ) - # ghyben herzberg - zeta1 = -9 - 40.0 * (head[0, 0, :]) - gbh = np.empty(len(zeta1), float) - gbho = np.empty(len(zeta1), float) - for idx, z1 in enumerate(zeta1): - if z1 >= -9.0 or z1 <= -50.0: - gbh[idx] = np.nan - gbho[idx] = 0.0 - else: - gbh[idx] = z1 - gbho[idx] = z1 - ax.plot(x, gbh, "r") - np.savetxt(os.path.join(workspace, "Ghyben-Herzberg.out"), gbho) - # fake figures - ax.plot([-100.0, -100], [-100.0, -100], "r", 
label="Ghyben-Herzberg") - ax.plot( - [-100.0, -100], - [-100.0, -100], - "bo", - markersize=3, - markeredgecolor="blue", - markerfacecolor="None", - label="SWI2", - ) - # legend - leg = ax.legend(loc="lower left", numpoints=1) - leg._drawFrame = False - # axes - ax.set_ylim(-50, -9) - ax.set_xlabel("Horizontal distance, in meters") - ax.set_ylabel("Elevation, in meters") - ax.set_xlim(-250.0, 2500.0) - - outfig = os.path.join(workspace, f"Figure08_swi2ex3.{fext}") - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_swi2_ex4.py b/examples/scripts/flopy_swi2_ex4.py deleted file mode 100644 index 7822337428..0000000000 --- a/examples/scripts/flopy_swi2_ex4.py +++ /dev/null @@ -1,604 +0,0 @@ -import os -import sys -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - -# --modify default matplotlib settings -updates = { - "font.family": ["Univers 57 Condensed", "Arial"], - "mathtext.default": "regular", - "pdf.compression": 0, - "pdf.fonttype": 42, - "legend.fontsize": 7, - "axes.labelsize": 8, - "xtick.labelsize": 7, - "ytick.labelsize": 7, -} -plt.rcParams.update(updates) - - -def LegBar(ax, x0, y0, t0, dx, dy, dt, cc): - for c in cc: - ax.plot([x0, x0 + dx], [y0, y0], color=c, linewidth=4) - ctxt = f"{t0:=3d} years" - ax.text(x0 + 2.0 * dx, y0 + dy / 2.0, ctxt, size=5) - y0 += dy - t0 += dt - return - - -def run(workspace, quiet): - cleanFiles = False - skipRuns = False - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--clean": - cleanFiles = True - elif basearg == "--skipruns": - skipRuns = True - elif basearg == "--pdf": - fext = "pdf" - - if cleanFiles: - print("cleaning all files") - print("excluding *.py files") - files = os.listdir(workspace) - for f in files: - fpth = os.path.join(workspace, f) - if os.path.isdir(fpth): - continue - if ".py" != os.path.splitext(f)[1].lower(): - print(f" removing...{os.path.basename(f)}") - try: - os.remove(fpth) - except: - pass - return 0 - - # Set path and name of MODFLOW exe - exe_name = "mf2005" - - ncol = 61 - nrow = 61 - nlay = 2 - - nper = 3 - perlen = [365.25 * 200.0, 365.25 * 12.0, 365.25 * 18.0] - nstp = [1000, 120, 180] - save_head = [200, 60, 60] - steady = True - - # dis data - delr, delc = 50.0, 50.0 - botm = np.array([-10.0, -30.0, -50.0]) - - # oc data - ocspd = {} - ocspd[(0, 199)] = ["save head"] - ocspd[(0, 200)] = [] - ocspd[(0, 399)] = ["save head"] - ocspd[(0, 400)] = [] - ocspd[(0, 599)] = ["save head"] - ocspd[(0, 600)] = [] - ocspd[(0, 799)] = ["save head"] - ocspd[(0, 800)] = [] - ocspd[(0, 999)] = ["save head"] - ocspd[(1, 0)] = [] - ocspd[(1, 59)] = ["save head"] - ocspd[(1, 60)] = [] - ocspd[(1, 119)] = ["save head"] - ocspd[(2, 0)] = [] - ocspd[(2, 59)] = ["save head"] - ocspd[(2, 60)] = 
[] - ocspd[(2, 119)] = ["save head"] - - modelname = "swiex4_2d_2layer" - - # bas data - # ibound - active except for the corners - ibound = np.ones((nlay, nrow, ncol), dtype=int) - ibound[:, 0, 0] = 0 - ibound[:, 0, -1] = 0 - ibound[:, -1, 0] = 0 - ibound[:, -1, -1] = 0 - # initial head data - ihead = np.zeros((nlay, nrow, ncol), dtype=float) - - # lpf data - laytyp = 0 - hk = 10.0 - vka = 0.2 - - # boundary condition data - # ghb data - colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow)) - index = np.zeros((nrow, ncol), dtype=int) - index[:, :10] = 1 - index[:, -10:] = 1 - index[:10, :] = 1 - index[-10:, :] = 1 - nghb = np.sum(index) - lrchc = np.zeros((nghb, 5)) - lrchc[:, 0] = 0 - lrchc[:, 1] = rowcell[index == 1] - lrchc[:, 2] = colcell[index == 1] - lrchc[:, 3] = 0.0 - lrchc[:, 4] = 50.0 * 50.0 / 40.0 - # create ghb dictionary - ghb_data = {0: lrchc} - - # recharge data - rch = np.zeros((nrow, ncol), dtype=float) - rch[index == 0] = 0.0004 - # create recharge dictionary - rch_data = {0: rch} - - # well data - nwells = 2 - lrcq = np.zeros((nwells, 4)) - lrcq[0, :] = np.array((0, 30, 35, 0)) - lrcq[1, :] = np.array([1, 30, 35, 0]) - lrcqw = lrcq.copy() - lrcqw[0, 3] = -250 - lrcqsw = lrcq.copy() - lrcqsw[0, 3] = -250.0 - lrcqsw[1, 3] = -25.0 - # create well dictionary - base_well_data = {0: lrcq, 1: lrcqw} - swwells_well_data = {0: lrcq, 1: lrcqw, 2: lrcqsw} - - # swi2 data - adaptive = False - nadptmx = 10 - nadptmn = 1 - nu = [0, 0.025] - numult = 5.0 - toeslope = nu[1] / numult # 0.005 - tipslope = nu[1] / numult # 0.005 - z1 = -10.0 * np.ones((nrow, ncol)) - z1[index == 0] = -11.0 - z = np.array([[z1, z1]]) - iso = np.zeros((nlay, nrow, ncol), int) - iso[0, :, :][index == 0] = 1 - iso[0, :, :][index == 1] = -2 - iso[1, 30, 35] = 2 - ssz = 0.2 - # swi2 observations - obsnam = ["layer1_", "layer2_"] - obslrc = [[0, 30, 35], [1, 30, 35]] - nobs = len(obsnam) - iswiobs = 1051 - - modelname = "swiex4_s1" - if not skipRuns: - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=exe_name, model_ws=workspace - ) - - discret = flopy.modflow.ModflowDis( - ml, - nlay=nlay, - nrow=nrow, - ncol=ncol, - laycbd=0, - delr=delr, - delc=delc, - top=botm[0], - botm=botm[1:], - nper=nper, - perlen=perlen, - nstp=nstp, - steady=steady, - ) - bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead) - lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka) - wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data) - ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data) - rch = flopy.modflow.ModflowRch(ml, rech=rch_data) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - nsrf=1, - istrat=1, - toeslope=toeslope, - tipslope=tipslope, - nu=nu, - zeta=z, - ssz=ssz, - isource=iso, - nsolver=1, - nadptmx=nadptmx, - nadptmn=nadptmn, - nobs=nobs, - iswiobs=iswiobs, - obsnam=obsnam, - obslrc=obslrc, - ) - oc = flopy.modflow.ModflowOc(ml, stress_period_data=ocspd) - pcg = flopy.modflow.ModflowPcg( - ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50 - ) - # create model files - ml.write_input() - # run the model - m = ml.run_model(silent=quiet) - - # model with saltwater wells - modelname2 = "swiex4_s2" - if not skipRuns: - ml2 = flopy.modflow.Modflow( - modelname2, version="mf2005", exe_name=exe_name, model_ws=workspace - ) - - discret = flopy.modflow.ModflowDis( - ml2, - nlay=nlay, - nrow=nrow, - ncol=ncol, - laycbd=0, - delr=delr, - delc=delc, - top=botm[0], - botm=botm[1:], - nper=nper, - perlen=perlen, - nstp=nstp, - 
steady=steady, - ) - bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead) - lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka) - wel = flopy.modflow.ModflowWel( - ml2, stress_period_data=swwells_well_data - ) - ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data) - rch = flopy.modflow.ModflowRch(ml2, rech=rch_data) - swi = flopy.modflow.ModflowSwi2( - ml2, - iswizt=55, - nsrf=1, - istrat=1, - toeslope=toeslope, - tipslope=tipslope, - nu=nu, - zeta=z, - ssz=ssz, - isource=iso, - nsolver=1, - nadptmx=nadptmx, - nadptmn=nadptmn, - nobs=nobs, - iswiobs=iswiobs, - obsnam=obsnam, - obslrc=obslrc, - ) - oc = flopy.modflow.ModflowOc(ml2, stress_period_data=ocspd) - pcg = flopy.modflow.ModflowPcg( - ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50 - ) - # create model files - ml2.write_input() - # run the model - m = ml2.run_model(silent=quiet) - - # Load the simulation 1 `ZETA` data and `ZETA` observations. - # read base model zeta - zfile = flopy.utils.CellBudgetFile( - os.path.join(workspace, f"{modelname}.zta") - ) - kstpkper = zfile.get_kstpkper() - zeta = [] - for kk in kstpkper: - zeta.append(zfile.get_data(kstpkper=kk, text="ZETASRF 1")[0]) - zeta = np.array(zeta) - # read swi obs - zobs = np.genfromtxt( - os.path.join(workspace, f"{modelname}.zobs.out"), names=True - ) - - # Load the simulation 2 `ZETA` data and `ZETA` observations. - # read saltwater well model zeta - zfile2 = flopy.utils.CellBudgetFile( - os.path.join(workspace, f"{modelname2}.zta") - ) - kstpkper = zfile2.get_kstpkper() - zeta2 = [] - for kk in kstpkper: - zeta2.append(zfile2.get_data(kstpkper=kk, text="ZETASRF 1")[0]) - zeta2 = np.array(zeta2) - # read swi obs - zobs2 = np.genfromtxt( - os.path.join(workspace, f"{modelname2}.zobs.out"), names=True - ) - - # Create arrays for the x-coordinates and the output years - - x = np.linspace(-1500, 1500, 61) - xcell = np.linspace(-1500, 1500, 61) + delr / 2.0 - xedge = np.linspace(-1525, 1525, 62) - years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30] - - # Define figure dimensions and colors used for plotting `ZETA` surfaces - - # figure dimensions - fwid, fhgt = 8.00, 5.50 - flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 - - # line color definition - icolor = 5 - colormap = plt.cm.jet # winter - cc = [] - cr = np.linspace(0.9, 0.0, icolor) - for idx in cr: - cc.append(colormap(idx)) - - # Recreate **Figure 9** from the SWI2 documentation (https://pubs.usgs.gov/tm/6a46/). 
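-    # The .zobs.out files read above with np.genfromtxt(..., names=True) come back
-    # as NumPy structured arrays, so observation columns can be pulled out by name,
-    # e.g. (a sketch of the indexing used for the time-series panel below):
-    #     t = zobs["TOTIM"] / 365.0 - 200.0   # elapsed time, in years
-    #     z1 = zobs["layer1_001"]             # zeta at the layer 1 obs cell
-    # where TOTIM and layer1_001 are column names taken from the file header.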
- - plt.rcParams.update({"legend.fontsize": 6, "legend.frameon": False}) - fig = plt.figure(figsize=(fwid, fhgt), facecolor="w") - fig.subplots_adjust( - wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop - ) - # first plot - ax = fig.add_subplot(2, 2, 1) - # axes limits - ax.set_xlim(-1500, 1500) - ax.set_ylim(-50, -10) - for idx in range(5): - # layer 1 - ax.plot( - xcell, - zeta[idx, 0, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx], - label=f"{years[idx]:2d} years", - ) - # layer 2 - ax.plot( - xcell, - zeta[idx, 1, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx], - label="_None", - ) - ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0) - # legend - plt.legend(loc="lower left") - # axes labels and text - ax.set_xlabel("Horizontal distance, in meters") - ax.set_ylabel("Elevation, in meters") - ax.text( - 0.025, - 0.55, - "Layer 1", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.025, - 0.45, - "Layer 2", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.975, - 0.1, - "Recharge conditions", - transform=ax.transAxes, - va="center", - ha="right", - size="8", - ) - - # second plot - ax = fig.add_subplot(2, 2, 2) - # axes limits - ax.set_xlim(-1500, 1500) - ax.set_ylim(-50, -10) - for idx in range(5, len(years) - 1): - # layer 1 - ax.plot( - xcell, - zeta[idx, 0, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx - 5], - label=f"{years[idx]:2d} years", - ) - # layer 2 - ax.plot( - xcell, - zeta[idx, 1, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx - 5], - label="_None", - ) - ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0) - # legend - plt.legend(loc="lower left") - # axes labels and text - ax.set_xlabel("Horizontal distance, in meters") - ax.set_ylabel("Elevation, in meters") - ax.text( - 0.025, - 0.55, - "Layer 1", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.025, - 0.45, - "Layer 2", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.975, - 0.1, - "Freshwater well withdrawal", - transform=ax.transAxes, - va="center", - ha="right", - size="8", - ) - - # third plot - ax = fig.add_subplot(2, 2, 3) - # axes limits - ax.set_xlim(-1500, 1500) - ax.set_ylim(-50, -10) - for idx in range(5, len(years) - 1): - # layer 1 - ax.plot( - xcell, - zeta2[idx, 0, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx - 5], - label=f"{years[idx]:2d} years", - ) - # layer 2 - ax.plot( - xcell, - zeta2[idx, 1, 30, :], - drawstyle="steps-mid", - linewidth=0.5, - color=cc[idx - 5], - label="_None", - ) - ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0) - # legend - plt.legend(loc="lower left") - # axes labels and text - ax.set_xlabel("Horizontal distance, in meters") - ax.set_ylabel("Elevation, in meters") - ax.text( - 0.025, - 0.55, - "Layer 1", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.025, - 0.45, - "Layer 2", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.975, - 0.1, - "Freshwater and saltwater\nwell withdrawals", - transform=ax.transAxes, - va="center", - ha="right", - size="8", - ) - - # fourth plot - ax = fig.add_subplot(2, 2, 4) - # axes limits - ax.set_xlim(0, 30) - ax.set_ylim(-50, -10) - t = zobs["TOTIM"][999:] / 365 - 200.0 - tz2 = zobs["layer1_001"][999:] - tz3 = zobs2["layer1_001"][999:] - for i in range(len(t)): - if zobs["layer2_001"][i + 
999] < -30.0 - 0.1: - tz2[i] = zobs["layer2_001"][i + 999] - if zobs2["layer2_001"][i + 999] < 20.0 - 0.1: - tz3[i] = zobs2["layer2_001"][i + 999] - ax.plot( - t, - tz2, - linestyle="solid", - color="r", - linewidth=0.75, - label="Freshwater well", - ) - ax.plot( - t, - tz3, - linestyle="dotted", - color="r", - linewidth=0.75, - label="Freshwater and saltwater well", - ) - ax.plot([0, 30], [-30, -30], "k", linewidth=1.0, label="_None") - # legend - leg = plt.legend(loc="lower right", numpoints=1) - # axes labels and text - ax.set_xlabel("Time, in years") - ax.set_ylabel("Elevation, in meters") - ax.text( - 0.025, - 0.55, - "Layer 1", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - ax.text( - 0.025, - 0.45, - "Layer 2", - transform=ax.transAxes, - va="center", - ha="left", - size="7", - ) - - outfig = os.path.join(workspace, f"Figure09_swi2ex4.{fext}") - fig.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass diff --git a/examples/scripts/flopy_swi2_ex5.py b/examples/scripts/flopy_swi2_ex5.py deleted file mode 100644 index 7a9a50a385..0000000000 --- a/examples/scripts/flopy_swi2_ex5.py +++ /dev/null @@ -1,647 +0,0 @@ -import math -import os -import sys -from tempfile import TemporaryDirectory - -import matplotlib.pyplot as plt -import numpy as np - -import flopy - -# --modify default matplotlib settings -updates = { - "font.family": ["Univers 57 Condensed", "Arial"], - "mathtext.default": "regular", - "pdf.compression": 0, - "pdf.fonttype": 42, - "legend.fontsize": 7, - "axes.labelsize": 8, - "xtick.labelsize": 7, - "ytick.labelsize": 7, -} -plt.rcParams.update(updates) - - -def run(workspace, quiet): - cleanFiles = False - skipRuns = False - fext = "png" - narg = len(sys.argv) - iarg = 0 - if narg > 1: - while iarg < narg - 1: - iarg += 1 - basearg = sys.argv[iarg].lower() - if basearg == "--clean": - cleanFiles = True - elif basearg == "--skipruns": - skipRuns = True - elif basearg == "--pdf": - fext = "pdf" - - dirs = [os.path.join(workspace, "SWI2"), os.path.join(workspace, "SEAWAT")] - - if cleanFiles: - print("cleaning all files") - print("excluding *.py files") - file_dict = {} - file_dict[0] = os.listdir(dirs[0]) - file_dict[1] = os.listdir(dirs[1]) - file_dict[-1] = os.listdir(workspace) - for key, files in list(file_dict.items()): - pth = "." 
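-            # as in the SWI2/SEAWAT comparison above, key -1 is the workspace
-            # root and keys 0 and 1 are the SWI2 and SEAWAT subdirectories;
-            # os.rmdir below removes those subdirectories once they are empty.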
- if key >= 0: - pth = dirs[key] - for f in files: - fpth = os.path.join(pth, f) - if os.path.isdir(fpth): - continue - if ".py" != os.path.splitext(f)[1].lower(): - print(f" removing...{os.path.basename(f)}") - try: - os.remove(fpth) - except: - pass - for d in dirs: - if os.path.exists(d): - os.rmdir(d) - return 0 - - # --make working directories - for d in dirs: - if not os.path.exists(d): - os.mkdir(d) - - # --problem data - nlay = 6 - nrow = 1 - ncol = 113 - delr = np.zeros((ncol), float) - delc = 1.0 - r = np.zeros((ncol), float) - x = np.zeros((ncol), float) - edge = np.zeros((ncol), float) - dx = 25.0 - for i in range(0, ncol): - delr[i] = dx - r[0] = delr[0] / 2.0 - for i in range(1, ncol): - r[i] = r[i - 1] + (delr[i - 1] + delr[i]) / 2.0 - x[0] = delr[0] / 2.0 - for i in range(1, ncol): - x[i] = x[i - 1] + (delr[i - 1] + delr[i]) / 2.0 - edge[0] = delr[0] - for i in range(1, ncol): - edge[i] = edge[i - 1] + delr[i] - - # constant data for all simulations - nper = 2 - perlen = [1460, 1460] - nstp = [1460, 1460] - steady = True - - nsave_zeta = 8 - ndecay = 4 - ibound = np.ones((nlay, nrow, ncol), int) - for k in range(0, nlay): - ibound[k, 0, ncol - 1] = -1 - bot = np.zeros((nlay, nrow, ncol), float) - dz = 100.0 / float(nlay - 1) - zall = -np.arange(0, 100 + dz, dz) - zall = np.append(zall, -120.0) - tb = -np.arange(dz, 100 + dz, dz) - tb = np.append(tb, -120.0) - for k in range(0, nlay): - for i in range(0, ncol): - bot[k, 0, i] = tb[k] - isource = np.zeros((nlay, nrow, ncol), int) - isource[:, 0, ncol - 1] = 1 - isource[nlay - 1, 0, ncol - 1] = 2 - - khb = (0.0000000000256 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 - kvb = (0.0000000000100 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 - ssb = 1e-5 - sszb = 0.2 - kh = np.zeros((nlay, nrow, ncol), float) - kv = np.zeros((nlay, nrow, ncol), float) - ss = np.zeros((nlay, nrow, ncol), float) - ssz = np.zeros((nlay, nrow, ncol), float) - for k in range(0, nlay): - for i in range(0, ncol): - f = r[i] * 2.0 * math.pi - kh[k, 0, i] = khb * f - kv[k, 0, i] = kvb * f - ss[k, 0, i] = ssb * f - ssz[k, 0, i] = sszb * f - z = np.ones((nlay), float) - z = -100.0 * z - - nwell = 1 - for k in range(0, nlay): - if zall[k] > -20.0 and zall[k + 1] <= -20: - nwell = k + 1 - print(f"nlay={nlay} dz={dz} nwell={nwell}") - wellQ = -2400.0 - wellbtm = -20.0 - wellQpm = wellQ / abs(wellbtm) - well_data = {} - for ip in range(0, nper): - welllist = np.zeros((nwell, 4), float) - for iw in range(0, nwell): - if ip == 0: - b = zall[iw] - zall[iw + 1] - if zall[iw + 1] < wellbtm: - b = zall[iw] - wellbtm - q = wellQpm * b - else: - q = 0.0 - welllist[iw, 0] = iw - welllist[iw, 1] = 0 - welllist[iw, 2] = 0 - welllist[iw, 3] = q - well_data[ip] = welllist.copy() - - ihead = np.zeros((nlay), float) - - ocspd = {} - for i in range(0, nper): - icnt = 0 - for j in range(0, nstp[i]): - icnt += 1 - if icnt == 365: - ocspd[(i, j)] = ["save head"] - icnt = 0 - else: - ocspd[(i, j)] = [] - - solver2params = { - "mxiter": 100, - "iter1": 20, - "npcond": 1, - "zclose": 1.0e-6, - "rclose": 3e-3, - "relax": 1.0, - "nbpol": 2, - "damp": 1.0, - "dampt": 1.0, - } - - # --create model file and run model - modelname = "swi2ex5" - mf_name = "mf2005" - if not skipRuns: - ml = flopy.modflow.Modflow( - modelname, version="mf2005", exe_name=mf_name, model_ws=dirs[0] - ) - discret = flopy.modflow.ModflowDis( - ml, - nrow=nrow, - ncol=ncol, - nlay=nlay, - delr=delr, - delc=delc, - top=0, - botm=bot, - laycbd=0, - nper=nper, - perlen=perlen, - nstp=nstp, - steady=steady, - ) - bas = 
flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead) - lpf = flopy.modflow.ModflowLpf( - ml, hk=kh, vka=kv, ss=ss, sy=ssz, vkcb=0, laytyp=0, layavg=1 - ) - wel = flopy.modflow.ModflowWel(ml, stress_period_data=well_data) - swi = flopy.modflow.ModflowSwi2( - ml, - iswizt=55, - nsrf=1, - istrat=1, - toeslope=0.025, - tipslope=0.025, - nu=[0, 0.025], - zeta=z, - ssz=ssz, - isource=isource, - nsolver=2, - solver2params=solver2params, - ) - oc = flopy.modflow.ModflowOc(ml, stress_period_data=ocspd) - pcg = flopy.modflow.ModflowPcg( - ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50 - ) - # --write the modflow files - ml.write_input() - m = ml.run_model(silent=quiet) - - # --read model zeta - get_stp = [364, 729, 1094, 1459, 364, 729, 1094, 1459] - get_per = [0, 0, 0, 0, 1, 1, 1, 1] - nswi_times = len(get_per) - zetafile = os.path.join(dirs[0], f"{modelname}.zta") - zobj = flopy.utils.CellBudgetFile(zetafile) - zeta = [] - for kk in zip(get_stp, get_per): - zeta.append(zobj.get_data(kstpkper=kk, text="ZETASRF 1")[0]) - zeta = np.array(zeta) - - # --seawat input - redefine input data that differ from SWI2 - nlay_swt = 120 - # --mt3d print times - timprs = (np.arange(8) + 1) * 365.0 - nprs = len(timprs) - # -- - ndecay = 4 - ibound = np.ones((nlay_swt, nrow, ncol), "int") - for k in range(0, nlay_swt): - ibound[k, 0, ncol - 1] = -1 - bot = np.zeros((nlay_swt, nrow, ncol), float) - zall = [0, -20.0, -40.0, -60.0, -80.0, -100.0, -120.0] - dz = 120.0 / nlay_swt - tb = np.arange(nlay_swt) * -dz - dz - sconc = np.zeros((nlay_swt, nrow, ncol), float) - icbund = np.ones((nlay_swt, nrow, ncol), int) - strt = np.zeros((nlay_swt, nrow, ncol), float) - pressure = 0.0 - g = 9.81 - z = -dz / 2.0 # cell center - for k in range(0, nlay_swt): - for i in range(0, ncol): - bot[k, 0, i] = tb[k] - if bot[k, 0, 0] >= -100.0: - sconc[k, 0, :] = 0.0 / 3.0 * 0.025 * 1000.0 / 0.7143 - else: - sconc[k, 0, :] = 3.0 / 3.0 * 0.025 * 1000.0 / 0.7143 - icbund[k, 0, -1] = -1 - - dense = 1000.0 + 0.7143 * sconc[k, 0, 0] - pressure += 0.5 * dz * dense * g - if k > 0: - z = z - dz - denseup = 1000.0 + 0.7143 * sconc[k - 1, 0, 0] - pressure += 0.5 * dz * denseup * g - strt[k, 0, :] = z + pressure / dense / g - # print z, pressure, strt[k, 0, 0], sconc[k, 0, 0] - - khb = (0.0000000000256 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 - kvb = (0.0000000000100 * 1000.0 * 9.81 / 0.001) * 60 * 60 * 24 - ssb = 1e-5 - sszb = 0.2 - kh = np.zeros((nlay_swt, nrow, ncol), float) - kv = np.zeros((nlay_swt, nrow, ncol), float) - ss = np.zeros((nlay_swt, nrow, ncol), float) - ssz = np.zeros((nlay_swt, nrow, ncol), float) - for k in range(0, nlay_swt): - for i in range(0, ncol): - f = r[i] * 2.0 * math.pi - kh[k, 0, i] = khb * f - kv[k, 0, i] = kvb * f - ss[k, 0, i] = ssb * f - ssz[k, 0, i] = sszb * f - # wells and ssm data - itype = flopy.mt3d.Mt3dSsm.itype_dict() - nwell = 1 - for k in range(0, nlay_swt): - if bot[k, 0, 0] >= -20.0: - nwell = k + 1 - print(f"nlay_swt={nlay_swt} dz={dz} nwell={nwell}") - well_data = {} - ssm_data = {} - wellQ = -2400.0 - wellbtm = -20.0 - wellQpm = wellQ / abs(wellbtm) - for ip in range(0, nper): - welllist = np.zeros((nwell, 4), float) - ssmlist = [] - for iw in range(0, nwell): - if ip == 0: - q = wellQpm * dz - else: - q = 0.0 - welllist[iw, 0] = iw - welllist[iw, 1] = 0 - welllist[iw, 2] = 0 - welllist[iw, 3] = q - ssmlist.append([iw, 0, 0, 0.0, itype["WEL"]]) - well_data[ip] = welllist.copy() - ssm_data[ip] = ssmlist - - # Define model name for SEAWAT model - modelname = "swi2ex5_swt" - swtexe_name = 
"swtv4" - # Create the MODFLOW model data - if not skipRuns: - m = flopy.seawat.Seawat( - modelname, exe_name=swtexe_name, model_ws=dirs[1] - ) - discret = flopy.modflow.ModflowDis( - m, - nrow=nrow, - ncol=ncol, - nlay=nlay_swt, - delr=delr, - delc=delc, - top=0, - botm=bot, - laycbd=0, - nper=nper, - perlen=perlen, - nstp=nstp, - steady=True, - ) - bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt) - lpf = flopy.modflow.ModflowLpf( - m, hk=kh, vka=kv, ss=ss, sy=ssz, vkcb=0, laytyp=0, layavg=1 - ) - wel = flopy.modflow.ModflowWel(m, stress_period_data=well_data) - oc = flopy.modflow.ModflowOc( - m, save_every=365, save_types=["save head"] - ) - pcg = flopy.modflow.ModflowPcg( - m, hclose=1.0e-5, rclose=3.0e-3, mxiter=100, iter1=50 - ) - # Create the basic MT3DMS model data - adv = flopy.mt3d.Mt3dAdv( - m, - mixelm=-1, - percel=0.5, - nadvfd=0, - # 0 or 1 is upstream; 2 is central in space - # particle based methods - nplane=4, - mxpart=1e7, - itrack=2, - dceps=1e-4, - npl=16, - nph=16, - npmin=8, - npmax=256, - ) - btn = flopy.mt3d.Mt3dBtn( - m, - icbund=icbund, - prsity=ssz, - ncomp=1, - sconc=sconc, - ifmtcn=-1, - chkmas=False, - nprobs=10, - nprmas=10, - dt0=1.0, - ttsmult=1.0, - nprs=nprs, - timprs=timprs, - mxstrn=1e8, - ) - dsp = flopy.mt3d.Mt3dDsp(m, al=0.0, trpt=1.0, trpv=1.0, dmcoef=0.0) - gcg = flopy.mt3d.Mt3dGcg(m, mxiter=1, iter1=50, isolve=1, cclose=1e-7) - ssm = flopy.mt3d.Mt3dSsm(m, stress_period_data=ssm_data) - # Create the SEAWAT model data - vdf = flopy.seawat.SeawatVdf( - m, - iwtable=0, - densemin=0, - densemax=0, - denseref=1000.0, - denseslp=0.7143, - firstdt=1e-3, - ) - # write seawat files - m.write_input() - - # Run SEAWAT - m = m.run_model(silent=quiet) - - # plot the results - # read seawat model data - ucnfile = os.path.join(dirs[1], "MT3D001.UCN") - uobj = flopy.utils.UcnFile(ucnfile) - times = uobj.get_times() - print(times) - conc = np.zeros((len(times), nlay_swt, ncol), float) - for idx, tt in enumerate(times): - c = uobj.get_data(totim=tt) - for ilay in range(0, nlay_swt): - for jcol in range(0, ncol): - conc[idx, ilay, jcol] = c[ilay, 0, jcol] - - # spatial data - # swi2 - bot = np.zeros((1, ncol, nlay), float) - dz = 100.0 / float(nlay - 1) - zall = -np.arange(0, 100 + dz, dz) - zall = np.append(zall, -120.0) - tb = -np.arange(dz, 100 + dz, dz) - tb = np.append(tb, -120.0) - for k in range(0, nlay): - for i in range(0, ncol): - bot[0, i, k] = tb[k] - # seawat - swt_dz = 120.0 / nlay_swt - swt_tb = np.zeros((nlay_swt), float) - zc = -swt_dz / 2.0 - for klay in range(0, nlay_swt): - swt_tb[klay] = zc - zc -= swt_dz - X, Z = np.meshgrid(x, swt_tb) - - # Make figure - fwid, fhgt = 6.5, 6.5 - flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 - - eps = 1.0e-3 - - lc = ["r", "c", "g", "b", "k"] - cfig = ["A", "B", "C", "D", "E", "F", "G", "H"] - inc = 1.0e-3 - - xsf, axes = plt.subplots(4, 2, figsize=(fwid, fhgt), facecolor="w") - xsf.subplots_adjust( - wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop - ) - # withdrawal and recovery titles - ax = axes.flatten()[0] - ax.text( - 0.0, - 1.03, - "Withdrawal", - transform=ax.transAxes, - va="bottom", - ha="left", - size="8", - ) - ax = axes.flatten()[1] - ax.text( - 0.0, - 1.03, - "Recovery", - transform=ax.transAxes, - va="bottom", - ha="left", - size="8", - ) - # dummy items for legend - ax = axes.flatten()[2] - ax.plot( - [-1, -1], - [-1, -1], - "bo", - markersize=3, - markeredgecolor="blue", - markerfacecolor="None", - label="SWI2 interface", - ) - ax.plot( - [-1, -1], - [-1, 
-1], - color="k", - linewidth=0.75, - linestyle="solid", - label="SEAWAT 50% seawater", - ) - ax.plot( - [-1, -1], - [-1, -1], - marker="s", - color="k", - linewidth=0, - linestyle="none", - markeredgecolor="w", - markerfacecolor="0.75", - label="SEAWAT 5-95% seawater", - ) - leg = ax.legend( - loc="upper left", - numpoints=1, - ncol=1, - labelspacing=0.5, - borderaxespad=1, - handlelength=3, - ) - leg._drawFrame = False - # data items - for itime in range(0, nswi_times): - zb = np.zeros((ncol), float) - zs = np.zeros((ncol), float) - for icol in range(0, ncol): - for klay in range(0, nlay): - # top and bottom of layer - ztop = float(f"{zall[klay]:10.3e}") - zbot = float(f"{zall[klay + 1]:10.3e}") - # fresh-salt zeta surface - zt = zeta[itime, klay, 0, icol] - if (ztop - zt) > eps: - zs[icol] = zt - if itime < ndecay: - ic = itime - isp = ic * 2 - ax = axes.flatten()[isp] - else: - ic = itime - ndecay - isp = (ic * 2) + 1 - ax = axes.flatten()[isp] - # figure title - ax.text( - -0.15, - 1.025, - cfig[itime], - transform=ax.transAxes, - va="center", - ha="center", - size="8", - ) - - # swi2 - ax.plot( - x, - zs, - "bo", - markersize=3, - markeredgecolor="blue", - markerfacecolor="None", - label="_None", - ) - - # seawat - sc = ax.contour( - X, - Z, - conc[itime, :, :], - levels=[17.5], - colors="k", - linestyles="solid", - linewidths=0.75, - zorder=30, - ) - cc = ax.contourf( - X, - Z, - conc[itime, :, :], - levels=[0.0, 1.75, 33.250], - colors=["w", "0.75", "w"], - ) - # set graph limits - ax.set_xlim(0, 500) - ax.set_ylim(-100, -65) - if itime < ndecay: - ax.set_ylabel("Elevation, in meters") - - # x labels - ax = axes.flatten()[6] - ax.set_xlabel("Horizontal distance, in meters") - ax = axes.flatten()[7] - ax.set_xlabel("Horizontal distance, in meters") - - # simulation time titles - for itime in range(0, nswi_times): - if itime < ndecay: - ic = itime - isp = ic * 2 - ax = axes.flatten()[isp] - else: - ic = itime - ndecay - isp = (ic * 2) + 1 - ax = axes.flatten()[isp] - iyr = itime + 1 - if iyr > 1: - ctxt = f"{iyr} years" - else: - ctxt = f"{iyr} year" - ax.text( - 0.95, - 0.925, - ctxt, - transform=ax.transAxes, - va="top", - ha="right", - size="8", - ) - - outfig = os.path.join(workspace, f"Figure11_swi2ex5.{fext}") - xsf.savefig(outfig, dpi=300) - print("created...", outfig) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--keep", help="output directory") - parser.add_argument( - "--quiet", action="store_false", help="don't show model output" - ) - args = vars(parser.parse_args()) - - workspace = args.get("keep", None) - quiet = args.get("quiet", False) - - if workspace is not None: - run(workspace, quiet) - else: - try: - with TemporaryDirectory() as workspace: - run(workspace, quiet) - except (PermissionError, NotADirectoryError): - # can occur on windows: https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory - pass From b0a9481db2dbdc854d3334771b549d3b72712d23 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sat, 23 Sep 2023 19:13:37 -0400 Subject: [PATCH 059/101] chore: format tests, colorize pytest CI output (#1961) --- .github/workflows/benchmark.yml | 4 +-- .github/workflows/commit.yml | 6 ++-- .github/workflows/examples.yml | 4 +-- .github/workflows/mf6.yml | 4 +-- .github/workflows/optional.yml | 2 +- .github/workflows/regression.yml | 4 +-- .github/workflows/release.yml | 2 +- .github/workflows/rtd.yml | 2 +- autotest/test_model_splitter.py | 60 ++++++++++++-------------------- 
autotest/test_modflow.py | 2 +- 10 files changed, 38 insertions(+), 52 deletions(-) diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index e72415c424..eba412d900 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -69,7 +69,7 @@ jobs: working-directory: ./autotest run: | mkdir -p .benchmarks - pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed + pytest -v --color=yes --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -79,7 +79,7 @@ jobs: working-directory: ./autotest run: | mkdir -p .benchmarks - pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed + pytest -v --color=yes --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 5e5163f5ce..dad0c098cf 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -131,7 +131,7 @@ jobs: - name: Smoke test working-directory: autotest - run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + run: pytest -v -n=auto --smoke --color=yes --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -226,7 +226,7 @@ jobs: if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --color=yes --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -236,7 +236,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --color=yes --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index fe9b014b3b..e0b4fb061c 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -97,7 +97,7 @@ jobs: if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed + pytest -v -m="example" -n=auto -s --color=yes --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -106,7 +106,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed + pytest -v -m="example" -n=auto -s --color=yes --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index 584e4e80ad..445eebf180 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -82,12 +82,12 @@ jobs: env: 
GITHUB_TOKEN: ${{ github.token }} run: | - pytest -v --durations=0 get_exes.py + pytest -v --color=yes --durations=0 get_exes.py - name: Run tests working-directory: modflow6/autotest run: | - pytest -v -n auto -k "test_gw" --durations=0 --cov=flopy --cov-report=xml + pytest -v --color=yes --cov=flopy --cov-report=xml --durations=0 -k "test_gw" -n auto - name: Print coverage report before upload working-directory: ./modflow6/autotest diff --git a/.github/workflows/optional.yml b/.github/workflows/optional.yml index ee29136542..c3081a7ab1 100644 --- a/.github/workflows/optional.yml +++ b/.github/workflows/optional.yml @@ -62,7 +62,7 @@ jobs: - name: Smoke test (${{ matrix.optdeps }}) working-directory: autotest - run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + run: pytest -v -n=auto --smoke --color=yes --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml index 972f1e1650..097ff9efa0 100644 --- a/.github/workflows/regression.yml +++ b/.github/workflows/regression.yml @@ -80,7 +80,7 @@ jobs: - name: Run regression tests if: runner.os != 'Windows' working-directory: ./autotest - run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed + run: pytest -v -m="regression" -n=auto --color=yes --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -88,7 +88,7 @@ jobs: if: runner.os == 'Windows' shell: bash -l {0} working-directory: ./autotest - run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed + run: pytest -v -m="regression" -n=auto --color=yes --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index e130f3a337..bcd0af801d 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -71,7 +71,7 @@ jobs: - name: Run tests working-directory: autotest - run: pytest -v -m="not example and not regression" -n=auto --durations=0 --keep-failed=.failed --dist loadfile + run: pytest -v -m="not example and not regression" -n=auto --color=yes --durations=0 --keep-failed=.failed --dist loadfile env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/rtd.yml b/.github/workflows/rtd.yml index b092e4b16e..76d4f1b43b 100644 --- a/.github/workflows/rtd.yml +++ b/.github/workflows/rtd.yml @@ -79,7 +79,7 @@ jobs: - name: Run tutorial and example notebooks working-directory: autotest - run: pytest -v -n auto test_notebooks.py + run: pytest -v -n auto --color=yes test_notebooks.py - name: Upload notebooks artifact for ReadtheDocs if: github.repository_owner == 'modflowpy' && github.event_name == 'push' && runner.os == 'Linux' diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index 921c29dc4a..e743399d91 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -1,9 +1,9 @@ import numpy as np -import flopy import pytest from autotest.conftest import get_example_data_path from modflow_devtools.markers import requires_exe, requires_pkg +import flopy from flopy.mf6 import MFSimulation from flopy.mf6.utils import Mf6Splitter @@ -260,11 +260,7 @@ def test_control_records(function_tmpdir): tdis = flopy.mf6.ModflowTdis( sim, nper=nper, - perioddata=( - (1., 1, 1.), - (1., 1, 1.), - (1., 1, 1.) 
-        )
+        perioddata=((1.0, 1, 1.0), (1.0, 1, 1.0), (1.0, 1, 1.0)),
     )
 
     gwf = flopy.mf6.ModflowGwf(sim, save_flows=True)
@@ -279,20 +275,20 @@ def test_control_records(function_tmpdir):
         delc=1,
         top=35,
         botm=[30, botm2],
-        idomain=1
+        idomain=1,
     )
 
     ic = flopy.mf6.ModflowGwfic(gwf, strt=32)
     npf = flopy.mf6.ModflowGwfnpf(
         gwf,
         k=[
-            1.,
+            1.0,
             {
                 "data": np.ones((10, 10)) * 0.75,
                 "filename": "k.l2.txt",
                 "iprn": 1,
-                "factor": 1
-            }
+                "factor": 1,
+            },
         ],
         k33=[
             np.ones((nrow, ncol)),
@@ -301,41 +297,29 @@ def test_control_records(function_tmpdir):
                 "filename": "k33.l2.bin",
                 "iprn": 1,
                 "factor": 1,
-                "binary": True
-            }
-        ]
-
+                "binary": True,
+            },
+        ],
     )
 
-    wel_rec = [((0, 4, 5), -10), ]
+    wel_rec = [
+        ((0, 4, 5), -10),
+    ]
 
     spd = {
         0: wel_rec,
-        1: {
-            "data": wel_rec,
-            "filename": "wel.1.txt"
-        },
-        2: {
-            "data": wel_rec,
-            "filename": "wel.2.bin",
-            "binary": True
-        }
+        1: {"data": wel_rec, "filename": "wel.1.txt"},
+        2: {"data": wel_rec, "filename": "wel.2.bin", "binary": True},
     }
 
-    wel = flopy.mf6.ModflowGwfwel(
-        gwf,
-        stress_period_data=spd
-    )
+    wel = flopy.mf6.ModflowGwfwel(gwf, stress_period_data=spd)
 
     chd_rec = []
     for cond, j in ((30, 0), (22, 9)):
         for i in range(10):
             chd_rec.append(((0, i, j), cond))
 
-    chd = flopy.mf6.ModflowGwfchd(
-        gwf,
-        stress_period_data={0: chd_rec}
-    )
+    chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data={0: chd_rec})
 
     arr = np.zeros((10, 10), dtype=int)
     arr[0:5, :] = 1
@@ -354,14 +338,18 @@ def test_control_records(function_tmpdir):
             "External ascii files not being preserved for MFArray"
         )
 
-    k33ls = ml1.npf.k33._data_storage.layer_storage.multi_dim_list 
+    k33ls = ml1.npf.k33._data_storage.layer_storage.multi_dim_list
    if k33ls[1].data_storage_type.value != 3 or not k33ls[1].binary:
        raise AssertionError(
            "Binary file input not being preserved for MFArray"
        )

-    spd_ls1 = ml1.wel.stress_period_data._data_storage[1].layer_storage.multi_dim_list[0]
-    spd_ls2 = ml1.wel.stress_period_data._data_storage[2].layer_storage.multi_dim_list[0]
+    spd_ls1 = ml1.wel.stress_period_data._data_storage[
+        1
+    ].layer_storage.multi_dim_list[0]
+    spd_ls2 = ml1.wel.stress_period_data._data_storage[
+        2
+    ].layer_storage.multi_dim_list[0]

    if spd_ls1.data_storage_type.value != 3 or spd_ls1.binary:
        raise AssertionError(
@@ -372,5 +360,3 @@ def test_control_records(function_tmpdir):
        raise AssertionError(
            "External binary file input not being preserved for MFList"
        )
-
-
diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py
index d19a255d37..fd34810f00 100644
--- a/autotest/test_modflow.py
+++ b/autotest/test_modflow.py
@@ -7,8 +7,8 @@
 import numpy as np
 import pytest
 from autotest.conftest import get_example_data_path
-from modflow_devtools.misc import has_pkg
 from modflow_devtools.markers import excludes_platform, requires_exe
+from modflow_devtools.misc import has_pkg
 
 from flopy.discretization import StructuredGrid
 from flopy.mf6 import MFSimulation

From 0d17ae5a30df032412cfbe73986d567a7c66e19b Mon Sep 17 00:00:00 2001
From: wpbonelli
Date: Sat, 23 Sep 2023 19:15:08 -0400
Subject: [PATCH 060/101] docs(code.rst): add model splitter to API docs (#1962)

---
 .docs/code.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.docs/code.rst b/.docs/code.rst
index d8aa1f4de1..3bfeb040a6 100644
--- a/.docs/code.rst
+++ b/.docs/code.rst
@@ -184,8 +184,7 @@ Contents:
 MODFLOW 6 Utility Functions
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-MODFLOW 6 has several utility functions that can be used to process MODFLOW 6
-results.
+MODFLOW 6 has a number of utilities useful for pre- or postprocessing. Contents: @@ -199,6 +198,7 @@ Contents: ./source/flopy.mf6.utils.postprocessing.rst ./source/flopy.mf6.utils.reference.rst ./source/flopy.mf6.utils.lakpak_utils.rst + ./source/flopy.mf6.utils.model_splitter.rst MODFLOW 6 Data From d7d7f6649118a84d869522b52b20a4af8a397ebe Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sun, 24 Sep 2023 13:21:19 -0400 Subject: [PATCH 061/101] fix(resolve_exe): support extensionless abs/rel paths on windows (#1957) --- autotest/test_mbase.py | 33 ++++++++++++++------------- autotest/test_modflow.py | 12 +++++----- flopy/mbase.py | 48 +++++++++++++++++++++++++--------------- 3 files changed, 52 insertions(+), 41 deletions(-) diff --git a/autotest/test_mbase.py b/autotest/test_mbase.py index 757909f960..8a2eddcadc 100644 --- a/autotest/test_mbase.py +++ b/autotest/test_mbase.py @@ -11,6 +11,9 @@ from flopy.utils.flopy_io import relpath_safe +_system = system() + + @pytest.fixture def mf6_model_path(example_data_path): return example_data_path / "mf6" / "test006_gwf3" @@ -18,10 +21,7 @@ def mf6_model_path(example_data_path): @requires_exe("mf6") @pytest.mark.parametrize("use_ext", [True, False]) -def test_resolve_exe_by_name(function_tmpdir, use_ext): - if use_ext and system() != "Windows": - pytest.skip(".exe extensions are Windows-only") - +def test_resolve_exe_by_name(use_ext): ext = ".exe" if use_ext else "" expected = which("mf6").lower() actual = resolve_exe(f"mf6{ext}") @@ -31,13 +31,14 @@ def test_resolve_exe_by_name(function_tmpdir, use_ext): @requires_exe("mf6") @pytest.mark.parametrize("use_ext", [True, False]) -def test_resolve_exe_by_abs_path(function_tmpdir, use_ext): - if use_ext and system() != "Windows": - pytest.skip(".exe extensions are Windows-only") - - ext = ".exe" if use_ext else "" +def test_resolve_exe_by_abs_path(use_ext): + abs_path = which("mf6") + if _system == "Windows" and not use_ext: + abs_path = abs_path[:-4] + elif _system != "Windows" and use_ext: + abs_path = f"{abs_path}.exe" expected = which("mf6").lower() - actual = resolve_exe(which(f"mf6{ext}")) + actual = resolve_exe(abs_path) assert actual.lower() == expected assert which(actual) @@ -46,9 +47,6 @@ def test_resolve_exe_by_abs_path(function_tmpdir, use_ext): @pytest.mark.parametrize("use_ext", [True, False]) @pytest.mark.parametrize("forgive", [True, False]) def test_resolve_exe_by_rel_path(function_tmpdir, use_ext, forgive): - if use_ext and system() != "Windows": - pytest.skip(".exe extensions are Windows-only") - ext = ".exe" if use_ext else "" expected = which("mf6").lower() @@ -59,10 +57,11 @@ def test_resolve_exe_by_rel_path(function_tmpdir, use_ext, forgive): with set_dir(inner_dir): # copy exe to relative dir - copy(expected, bin_dir / "mf6") - assert (bin_dir / "mf6").is_file() + new_exe_path = bin_dir / Path(expected).name + copy(expected, new_exe_path) + assert new_exe_path.is_file() - expected = which(str(Path(bin_dir / "mf6").absolute())).lower() + expected = which(str(new_exe_path.absolute())).lower() actual = resolve_exe(f"../bin/mf6{ext}") assert actual.lower() == expected assert which(actual) @@ -77,7 +76,7 @@ def test_resolve_exe_by_rel_path(function_tmpdir, use_ext, forgive): def test_run_model_when_namefile_not_in_model_ws( - mf6_model_path, example_data_path, function_tmpdir + mf6_model_path, function_tmpdir ): # copy input files to temp workspace ws = function_tmpdir / "ws" diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index fd34810f00..6f628aad68 100644 --- 
a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -268,10 +268,10 @@ def test_exe_selection(example_data_path, function_tmpdir): # no selection defaults to mf2005 exe_name = "mf2005" - assert Path(Modflow().exe_name).name == exe_name - assert Path(Modflow(exe_name=None).exe_name).name == exe_name + assert Path(Modflow().exe_name).stem == exe_name + assert Path(Modflow(exe_name=None).exe_name).stem == exe_name assert ( - Path(Modflow.load(namfile_path, model_ws=model_path).exe_name).name + Path(Modflow.load(namfile_path, model_ws=model_path).exe_name).stem == exe_name ) assert ( @@ -279,20 +279,20 @@ def test_exe_selection(example_data_path, function_tmpdir): Modflow.load( namfile_path, exe_name=None, model_ws=model_path ).exe_name - ).name + ).stem == exe_name ) # user-specified (just for testing - there is no legitimate reason # to use mp7 with Modflow but Modpath7 derives from BaseModel too) exe_name = "mp7" - assert Path(Modflow(exe_name=exe_name).exe_name).name == exe_name + assert Path(Modflow(exe_name=exe_name).exe_name).stem == exe_name assert ( Path( Modflow.load( namfile_path, exe_name=exe_name, model_ws=model_path ).exe_name - ).name + ).stem == exe_name ) diff --git a/flopy/mbase.py b/flopy/mbase.py index 6f1cd660c3..af711c5a2f 100644 --- a/flopy/mbase.py +++ b/flopy/mbase.py @@ -26,8 +26,10 @@ from .utils import flopy_io from .version import __version__ +on_windows = sys.platform.startswith("win") + # Prepend flopy appdir bin directory to PATH to work with "get-modflow :flopy" -if sys.platform.startswith("win"): +if on_windows: flopy_bin = os.path.expandvars(r"%LOCALAPPDATA%\flopy\bin") else: flopy_bin = os.path.join(os.path.expanduser("~"), ".local/share/flopy/bin") @@ -62,25 +64,34 @@ def resolve_exe( str: absolute path to the executable """ - exe_name = str(exe_name) - exe = which(exe_name) - if exe is not None: - # in case which() returned a relative path, resolve it - exe = which(str(Path(exe).resolve())) - else: - if exe_name.lower().endswith(".exe"): - # try removing .exe suffix - exe = which(exe_name[:-4]) + def _resolve(exe_name): + exe = which(exe_name) if exe is not None: - # in case which() returned a relative path, resolve it + # if which() returned a relative path, resolve it exe = which(str(Path(exe).resolve())) else: - # try tilde-expanded abspath - exe = which(Path(exe_name).expanduser().absolute()) - if exe is None and exe_name.lower().endswith(".exe"): - # try tilde-expanded abspath without .exe suffix - exe = which(Path(exe_name[:-4]).expanduser().absolute()) - if exe is None: + if exe_name.lower().endswith(".exe"): + # try removing .exe suffix + exe = which(exe_name[:-4]) + if exe is not None: + # in case which() returned a relative path, resolve it + exe = which(str(Path(exe).resolve())) + else: + # try tilde-expanded abspath + exe = which(Path(exe_name).expanduser().absolute()) + if exe is None and exe_name.lower().endswith(".exe"): + # try tilde-expanded abspath without .exe suffix + exe = which(Path(exe_name[:-4]).expanduser().absolute()) + return exe + + name = str(exe_name) + exe_path = _resolve(name) + if exe_path is None and on_windows and Path(name).suffix == "": + # try adding .exe suffix on windows (for portability from other OS) + exe_path = _resolve(f"{name}.exe") + + # raise if we are unforgiving, otherwise return None + if exe_path is None: if forgive: warn( f"The program {exe_name} does not exist or is not executable." 
@@ -90,7 +101,8 @@ def resolve_exe( raise FileNotFoundError( f"The program {exe_name} does not exist or is not executable." ) - return exe + + return exe_path # external exceptions for users From 5b5eb4ec20c833df4117415d433513bf3a1b4aa5 Mon Sep 17 00:00:00 2001 From: Eric Morway Date: Mon, 25 Sep 2023 10:37:42 -0700 Subject: [PATCH 062/101] fix(model_splitter.py): standardizing naming of iuzno, rno, lakeno, & wellno to ifno (#1963) * fix(model_splitter.py): standardizing use of 'iuzno', 'rno', 'lakeno', & 'wellno' to 'ifno' * run black --- flopy/mf6/utils/model_splitter.py | 173 +++++++++++++++--------------- 1 file changed, 85 insertions(+), 88 deletions(-) diff --git a/flopy/mf6/utils/model_splitter.py b/flopy/mf6/utils/model_splitter.py index 1c75c6c39c..3f36bc53b0 100644 --- a/flopy/mf6/utils/model_splitter.py +++ b/flopy/mf6/utils/model_splitter.py @@ -57,37 +57,37 @@ "riv": "cellid", "wel": "cellid", "lak": { - "stage": "lakeno", - "ext-inflow": "lakeno", - "outlet-inflow": "lakeno", - "inflow": "lakeno", - "from-mvr": "lakeno", - "rainfall": "lakeno", - "runoff": "lakeno", - "lak": "lakeno", - "withdrawal": "lakeno", - "evaporation": "lakeno", - "ext-outflow": "lakeno", + "stage": "ifno", + "ext-inflow": "ifno", + "outlet-inflow": "ifno", + "inflow": "ifno", + "from-mvr": "ifno", + "rainfall": "ifno", + "runoff": "ifno", + "lak": "ifno", + "withdrawal": "ifno", + "evaporation": "ifno", + "ext-outflow": "ifno", "to-mvr": "outletno", - "storage": "lakeno", - "constant": "lakeno", + "storage": "ifno", + "constant": "ifno", "outlet": "outletno", - "volume": "lakeno", - "surface-area": "lakeno", - "wetted-area": "lakeno", - "conductance": "lakeno", + "volume": "ifno", + "surface-area": "ifno", + "wetted-area": "ifno", + "conductance": "ifno", }, - "maw": "wellno", - "sfr": "rno", - "uzf": "iuzno", + "maw": "ifno", + "sfr": "ifno", + "uzf": "ifno", # transport "gwt": "cellid", "cnc": "cellid", "src": "cellid", - "sft": "rno", - "lkt": "lakeno", + "sft": "ifno", + "lkt": "ifno", "mwt": "mawno", - "uzt": "uztno", + "uzt": "ifno", } OBS_ID2_LUT = { @@ -107,13 +107,13 @@ "gwt": "cellid", "cnc": None, "src": None, - "sft": "rno", + "sft": "ifno", "lkt": { - "flow-ja-face": "lakeno", + "flow-ja-face": "ifno", "lkt": None, }, "mwt": None, - "uzt": "uztno", + "uzt": "ifno", } @@ -395,7 +395,7 @@ def optimize_splitting_mask(self, nparts): nodes = [i[0] for i in cellids] if isinstance(package, modflow.ModflowGwflak): - lakenos = package.connectiondata.array.lakeno + 1 + lakenos = package.connectiondata.array.ifno + 1 lak_array[nodes] = lakenos laks += [i for i in np.unique(lakenos)] else: @@ -1338,7 +1338,7 @@ def _remap_uzf(self, package, mapped_data): ------- dict """ - obs_map = {"iuzno": {}} + obs_map = {"ifno": {}} if isinstance(package, modflow.ModflowGwfuzf): packagedata = package.packagedata.array perioddata = package.perioddata.data @@ -1358,7 +1358,7 @@ def _remap_uzf(self, package, mapped_data): if new_recarray is not None: uzf_remap = { - i: ix for ix, i in enumerate(new_recarray.iuzno) + i: ix for ix, i in enumerate(new_recarray.ifno) } if "boundname" in new_recarray.dtype.names: for bname in new_recarray.boundname: @@ -1369,14 +1369,14 @@ def _remap_uzf(self, package, mapped_data): for oid, nid in uzf_remap.items(): mvr_remap[oid] = (model.name, nid) self._uzf_remaps[name][oid] = (mkey, nid) - obs_map["iuzno"][oid] = (mkey, nid) + obs_map["ifno"][oid] = (mkey, nid) new_cellids = self._new_node_to_cellid( model, new_node, layers, idx ) new_recarray["cellid"] = new_cellids - 
new_recarray["iuzno"] = [ - uzf_remap[i] for i in new_recarray["iuzno"] + new_recarray["ifno"] = [ + uzf_remap[i] for i in new_recarray["ifno"] ] new_recarray["ivertcon"] = [ uzf_remap[i] for i in new_recarray["ivertcon"] @@ -1388,10 +1388,10 @@ def _remap_uzf(self, package, mapped_data): spd = {} for per, recarray in perioddata.items(): - idx = np.where(np.isin(recarray.iuzno, uzf_nodes)) + idx = np.where(np.isin(recarray.ifno, uzf_nodes)) new_period = recarray[idx] - new_period["iuzno"] = [ - uzf_remap[i] for i in new_period["iuzno"] + new_period["ifno"] = [ + uzf_remap[i] for i in new_period["ifno"] ] spd[per] = new_period @@ -1413,14 +1413,14 @@ def _remap_uzf(self, package, mapped_data): name = package.flow_package_name.array uzf_remap = self._uzf_remaps[name] mapped_data = self._remap_adv_transport( - package, "iuzno", uzf_remap, mapped_data + package, "ifno", uzf_remap, mapped_data ) for obspak in package.obs._packages: mapped_data = self._remap_obs( obspak, mapped_data, - obs_map["iuzno"], + obs_map["ifno"], pkg_type=package.package_type, ) @@ -1519,7 +1519,7 @@ def _remap_lak(self, package, mapped_data): packagedata = package.packagedata.array perioddata = package.perioddata.data - obs_map = {"lakeno": {}, "outletno": {}} + obs_map = {"ifno": {}, "outletno": {}} if isinstance(package, modflow.ModflowGwflak): connectiondata = package.connectiondata.array tables = package.tables.array @@ -1547,23 +1547,23 @@ def _remap_lak(self, package, mapped_data): new_recarray["cellid"] = new_cellids for nlak, lak in enumerate( - sorted(np.unique(new_recarray.lakeno)) + sorted(np.unique(new_recarray.ifno)) ): lak_remaps[lak] = (mkey, nlak) self._lak_remaps[name][lak] = (mkey, nlak) - obs_map["lakeno"][lak] = (mkey, nlak) + obs_map["ifno"][lak] = (mkey, nlak) - new_lak = [lak_remaps[i][-1] for i in new_recarray.lakeno] - new_recarray["lakeno"] = new_lak + new_lak = [lak_remaps[i][-1] for i in new_recarray.ifno] + new_recarray["ifno"] = new_lak new_packagedata = self._remap_adv_tag( - mkey, packagedata, "lakeno", lak_remaps + mkey, packagedata, "ifno", lak_remaps ) new_tables = None if tables is not None: new_tables = self._remap_adv_tag( - mkey, tables, "lakeno", lak_remaps + mkey, tables, "ifno", lak_remaps ) new_outlets = None @@ -1613,9 +1613,7 @@ def _remap_lak(self, package, mapped_data): mapped_data[mkey]["tables"] = new_tables mapped_data[mkey]["outlets"] = new_outlets mapped_data[mkey]["perioddata"] = spd - mapped_data[mkey]["nlakes"] = len( - new_packagedata.lakeno - ) + mapped_data[mkey]["nlakes"] = len(new_packagedata.ifno) if new_outlets is not None: mapped_data[mkey]["noutlets"] = len(new_outlets) if new_tables is not None: @@ -1651,7 +1649,7 @@ def _remap_sfr(self, package, mapped_data): ------- dict """ - obs_map = {"rno": {}} + obs_map = {"ifno": {}} if isinstance(package, modflow.ModflowGwfsfr): packagedata = package.packagedata.array crosssections = package.crosssections.array @@ -1684,21 +1682,21 @@ def _remap_sfr(self, package, mapped_data): new_rno = [] old_rno = [] - for ix, rno in enumerate(new_recarray.rno): + for ix, ifno in enumerate(new_recarray.ifno): new_rno.append(ix) - old_rno.append(rno) - sfr_remaps[rno] = (mkey, ix) - sfr_remaps[-1 * rno] = (mkey, -1 * ix) - self._sfr_remaps[name][rno] = (mkey, ix) - obs_map["rno"][rno] = (mkey, ix) + old_rno.append(ifno) + sfr_remaps[ifno] = (mkey, ix) + sfr_remaps[-1 * ifno] = (mkey, -1 * ix) + self._sfr_remaps[name][ifno] = (mkey, ix) + obs_map["ifno"][ifno] = (mkey, ix) - new_recarray["rno"] = new_rno + new_recarray["ifno"] = 
new_rno
             obs_map = self._set_boundname_remaps(
-                new_recarray, obs_map, ["rno"], mkey
+                new_recarray, obs_map, ["ifno"], mkey
             )
 
             # now let's remap connection data and tag external exchanges
-            idx = np.where(np.isin(connectiondata.rno, old_rno))[0]
+            idx = np.where(np.isin(connectiondata.ifno, old_rno))[0]
             new_connectiondata = connectiondata[idx]
             ncons = []
             for ix, rec in enumerate(new_connectiondata):
@@ -1720,11 +1718,11 @@
                     if rec[item] < 0:
                         # downstream connection
                         sfr_mvr_conn.append(
-                            (rec["rno"], int(abs(rec[item])))
+                            (rec["ifno"], int(abs(rec[item])))
                         )
                     else:
                         sfr_mvr_conn.append(
-                            (int(rec[item]), rec["rno"])
+                            (int(rec[item]), rec["ifno"])
                         )
             # sort the new_rec so nan is last
             ncons.append(len(new_rec) - 1)
@@ -1740,7 +1738,7 @@
             new_crosssections = None
             if crosssections is not None:
                 new_crosssections = self._remap_adv_tag(
-                    mkey, crosssections, "rno", sfr_remaps
+                    mkey, crosssections, "ifno", sfr_remaps
                 )
 
             new_diversions = None
@@ -1748,40 +1746,42 @@
             if diversions is not None:
                 # first check if diversion outlet is outside the model
                 for ix, rec in enumerate(diversions):
-                    rno = rec.rno
+                    ifno = rec.ifno
                     iconr = rec.iconr
                     if (
-                        rno not in sfr_remaps
+                        ifno not in sfr_remaps
                         and iconr not in sfr_remaps
                     ):
                         continue
-                    elif rno in sfr_remaps and iconr not in sfr_remaps:
+                    elif (
+                        ifno in sfr_remaps and iconr not in sfr_remaps
+                    ):
                         div_mover_ix.append(ix)
                     else:
-                        m0 = sfr_remaps[rno][0]
+                        m0 = sfr_remaps[ifno][0]
                         m1 = sfr_remaps[iconr][0]
                         if m0 != m1:
                             div_mover_ix.append(ix)
 
-                idx = np.where(np.isin(diversions.rno, old_rno))[0]
+                idx = np.where(np.isin(diversions.ifno, old_rno))[0]
                 idx = np.where(~np.isin(idx, div_mover_ix))[0]
 
                 new_diversions = diversions[idx]
                 new_rno = [
-                    sfr_remaps[i][-1] for i in new_diversions.rno
+                    sfr_remaps[i][-1] for i in new_diversions.ifno
                 ]
                 new_iconr = [
                     sfr_remaps[i][-1] for i in new_diversions.iconr
                 ]
                 new_idv = list(range(len(new_diversions)))
-                new_diversions["rno"] = new_rno
+                new_diversions["ifno"] = new_rno
                 new_diversions["iconr"] = new_iconr
                 new_diversions["idv"] = new_idv
 
                 externals = diversions[div_mover_ix]
                 for rec in externals:
                     div_mvr_conn[rec["idv"]] = [
-                        rec["rno"],
+                        rec["ifno"],
                         rec["iconr"],
                         rec["cprior"],
                     ]
@@ -1789,7 +1789,7 @@
             # now we can do the stress period data
             spd = {}
             for kper, recarray in perioddata.items():
-                idx = np.where(np.isin(recarray.rno, old_rno))[0]
+                idx = np.where(np.isin(recarray.ifno, old_rno))[0]
                 new_spd = recarray[idx]
                 if diversions is not None:
                     external_divs = np.where(
@@ -1810,8 +1810,8 @@
                     new_spd = new_spd[idx]
 
                 # now to remap the rnos...
-                new_rno = [sfr_remaps[i][-1] for i in new_spd.rno]
-                new_spd["rno"] = new_rno
+                new_rno = [sfr_remaps[i][-1] for i in new_spd.ifno]
+                new_spd["ifno"] = new_rno
                 spd[kper] = new_spd
 
             mapped_data[mkey]["packagedata"] = new_recarray
@@ -1874,14 +1874,14 @@ def _remap_sfr(self, package, mapped_data):
             flow_package_name = package.flow_package_name.array
             sfr_remap = self._sfr_remaps[flow_package_name]
             mapped_data = self._remap_adv_transport(
-                package, "rno", sfr_remap, mapped_data
+                package, "ifno", sfr_remap, mapped_data
             )
 
         for obspak in package.obs._packages:
             mapped_data = self._remap_obs(
                 obspak,
                 mapped_data,
-                obs_map["rno"],
+                obs_map["ifno"],
                 pkg_type=package.package_type,
             )
 
@@ -1901,7 +1901,7 @@ def _remap_maw(self, package, mapped_data):
         -------
             dict
         """
-        obs_map = {"wellno": {}}
+        obs_map = {"ifno": {}}
         if isinstance(package, modflow.ModflowGwfmaw):
             connectiondata = package.connectiondata.array
             packagedata = package.packagedata.array
@@ -1926,39 +1926,36 @@ def _remap_maw(self, package, mapped_data):
 
                     maw_wellnos = []
                     for nmaw, maw in enumerate(
-                        sorted(np.unique(new_connectiondata.wellno))
+                        sorted(np.unique(new_connectiondata.ifno))
                     ):
                         maw_wellnos.append(maw)
                         maw_remaps[maw] = (mkey, nmaw)
                         self._maw_remaps[maw] = (mkey, nmaw)
-                        obs_map["wellno"][maw] = (mkey, nmaw)
+                        obs_map["ifno"][maw] = (mkey, nmaw)
 
                     new_wellno = [
-                        maw_remaps[wl][-1] for wl in new_connectiondata.wellno
+                        maw_remaps[wl][-1] for wl in new_connectiondata.ifno
                     ]
                     new_connectiondata["cellid"] = new_cellids
-                    new_connectiondata["wellno"] = new_wellno
+                    new_connectiondata["ifno"] = new_wellno
 
                     obs_map = self._set_boundname_remaps(
-                        new_connectiondata, obs_map, ["wellno"], mkey
+                        new_connectiondata, obs_map, ["ifno"], mkey
                     )
 
                     new_packagedata = self._remap_adv_tag(
-                        mkey, packagedata, "wellno", maw_remaps
+                        mkey, packagedata, "ifno", maw_remaps
                     )
 
                     spd = {}
                     for per, recarray in perioddata.items():
-                        idx = np.where(np.isin(recarray.wellno, maw_wellnos))[
-                            0
-                        ]
+                        idx = np.where(np.isin(recarray.ifno, maw_wellnos))[0]
                         if len(idx) > 0:
                             new_recarray = recarray[idx]
                             new_wellno = [
-                                maw_remaps[wl][-1]
-                                for wl in new_recarray.wellno
+                                maw_remaps[wl][-1] for wl in new_recarray.ifno
                             ]
-                            new_recarray["wellno"] = new_wellno
+                            new_recarray["ifno"] = new_wellno
                             spd[per] = new_recarray
 
                     mapped_data[mkey]["nmawwells"] = len(new_packagedata)
@@ -1979,7 +1976,7 @@ def _remap_maw(self, package, mapped_data):
                     mapped_data = self._remap_obs(
                         obspak,
                         mapped_data,
-                        obs_map["wellno"],
+                        obs_map["ifno"],
                         pkg_type=package.package_type,
                     )
 
@@ -2640,7 +2637,7 @@ def _new_node_to_cellid(self, model, new_node, layers, idx):
 
     def _remap_adv_tag(self, mkey, recarray, item, mapper):
         """
-        Method to remap advanced package ids such as SFR's rno variable
+        Method to remap advanced package ids such as SFR's ifno variable
 
         Parameters
         ----------

From a3fb59b364fb51e8ef5295f8df3b2f849e6ac7d4 Mon Sep 17 00:00:00 2001
From: Eric Morway
Date: Tue, 26 Sep 2023 10:00:01 -0700
Subject: [PATCH 063/101] docs(mf6io): update .dfn files corresponding to
 similar change in mf6 (#1956)

* docs(mf6io): update .dfn files corresponding to parallel change in mf6

* completes the standardization of the advanced package feature number naming

* updating classes

* forgot to run isort

* undo modification of timestamps for classes not updated by this PR

---
 flopy/mf6/data/dfn/gwf-lak.dfn    | 26 ++---
 flopy/mf6/data/dfn/gwf-maw.dfn    | 24 ++--
 flopy/mf6/data/dfn/gwf-sfr.dfn    | 48 ++++----
 flopy/mf6/data/dfn/gwf-uzf.dfn    | 12 +-
 flopy/mf6/data/dfn/gwt-lkt.dfn    | 12 +-
 flopy/mf6/data/dfn/gwt-mwt.dfn    | 
12 +- flopy/mf6/data/dfn/gwt-sft.dfn | 12 +- flopy/mf6/data/dfn/gwt-uzt.dfn | 12 +- flopy/mf6/modflow/__init__.py | 1 + flopy/mf6/modflow/mfgwflak.py | 65 ++++++----- flopy/mf6/modflow/mfgwfmaw.py | 68 +++++------ flopy/mf6/modflow/mfgwfsfr.py | 182 ++++++++++++++++-------------- flopy/mf6/modflow/mfgwfuzf.py | 36 +++--- flopy/mf6/modflow/mfgwtlkt.py | 24 ++-- flopy/mf6/modflow/mfgwtmwt.py | 34 +++--- flopy/mf6/modflow/mfgwtsft.py | 48 ++++---- flopy/mf6/modflow/mfgwtuzt.py | 34 +++--- flopy/mf6/modflow/mfsimulation.py | 11 +- 18 files changed, 342 insertions(+), 319 deletions(-) diff --git a/flopy/mf6/data/dfn/gwf-lak.dfn b/flopy/mf6/data/dfn/gwf-lak.dfn index 36a67c16dc..d73c4d13b7 100644 --- a/flopy/mf6/data/dfn/gwf-lak.dfn +++ b/flopy/mf6/data/dfn/gwf-lak.dfn @@ -349,21 +349,21 @@ description value specifying the number of lakes tables that will be used to def block packagedata name packagedata -type recarray lakeno strt nlakeconn aux boundname +type recarray ifno strt nlakeconn aux boundname shape (maxbound) reader urword longname description block packagedata -name lakeno +name ifno type integer shape tagged false in_record true reader urword longname lake number for this entry -description integer value that defines the lake number associated with the specified PACKAGEDATA data on the line. LAKENO must be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. The program will also terminate with an error if information for a lake is specified more than once. +description integer value that defines the feature (lake) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. The program will also terminate with an error if information for a lake is specified more than once. numeric_index true block packagedata @@ -384,7 +384,7 @@ tagged false in_record true reader urword longname number of lake connections -description integer value that defines the number of GWF cells connected to this (LAKENO) lake. There can only be one vertical lake connection to each GWF cell. NLAKECONN must be greater than zero. +description integer value that defines the number of GWF cells connected to this (IFNO) lake. There can only be one vertical lake connection to each GWF cell. NLAKECONN must be greater than zero. block packagedata name aux @@ -414,21 +414,21 @@ description REPLACE boundname {'{#1}': 'lake'} block connectiondata name connectiondata -type recarray lakeno iconn cellid claktype bedleak belev telev connlen connwidth +type recarray ifno iconn cellid claktype bedleak belev telev connlen connwidth shape (sum(nlakeconn)) reader urword longname description block connectiondata -name lakeno +name ifno type integer shape tagged false in_record true reader urword longname lake number for this entry -description integer value that defines the lake number associated with the specified CONNECTIONDATA data on the line. LAKENO must be greater than zero and less than or equal to NLAKES. Lake connection information must be specified for every lake connection to the GWF model (NLAKECONN) or the program will terminate with an error. The program will also terminate with an error if connection information for a lake connection to the GWF model is specified more than once. 
+description integer value that defines the feature (lake) number associated with the specified CONNECTIONDATA data on the line. IFNO must be greater than zero and less than or equal to NLAKES. Lake connection information must be specified for every lake connection to the GWF model (NLAKECONN) or the program will terminate with an error. The program will also terminate with an error if connection information for a lake connection to the GWF model is specified more than once. numeric_index true block connectiondata @@ -439,7 +439,7 @@ tagged false in_record true reader urword longname connection number for this entry -description integer value that defines the GWF connection number for this lake connection entry. ICONN must be greater than zero and less than or equal to NLAKECONN for lake LAKENO. +description integer value that defines the GWF connection number for this lake connection entry. ICONN must be greater than zero and less than or equal to NLAKECONN for lake IFNO. numeric_index true block connectiondata @@ -470,7 +470,7 @@ tagged false in_record true reader urword longname bed leakance -description character string or real value that defines the bed leakance for the lake-GWF connection. BEDLEAK must be greater than or equal to zero or specified to be NONE. If BEDLEAK is specified to be NONE, the lake-GWF connection conductance is solely a function of aquifer properties in the connected GWF cell and lakebed sediments are assumed to be absent. +description real value or character string that defines the bed leakance for the lake-GWF connection. BEDLEAK must be greater than or equal to zero, equal to the DNODATA value (3.0E+30), or specified to be NONE. If DNODATA or NONE is specified for BEDLEAK, the lake-GWF connection conductance is solely a function of aquifer properties in the connected GWF cell and lakebed sediments are assumed to be absent. Warning messages will be issued if NONE is specified. Eventually the ability to specify NONE will be deprecated and cause MODFLOW 6 to terminate with an error. block connectiondata name belev @@ -517,21 +517,21 @@ description real value that defines the connection face width for a HORIZONTAL l block tables name tables -type recarray lakeno tab6 filein tab6_filename +type recarray ifno tab6 filein tab6_filename shape (ntables) reader urword longname description block tables -name lakeno +name ifno type integer shape tagged false in_record true reader urword longname lake number for this entry -description integer value that defines the lake number associated with the specified TABLES data on the line. LAKENO must be greater than zero and less than or equal to NLAKES. The program will terminate with an error if table information for a lake is specified more than once or the number of specified tables is less than NTABLES. +description integer value that defines the feature (lake) number associated with the specified TABLES data on the line. IFNO must be greater than zero and less than or equal to NLAKES. The program will terminate with an error if table information for a lake is specified more than once or the number of specified tables is less than NTABLES. numeric_index true block tables @@ -796,7 +796,7 @@ in_record true reader urword time_series true longname extraction rate -description real or character value that defines the extraction rate for the lake outflow. A positive value indicates inflow and a negative value indicates outflow from the lake. RATE only applies to active (IBOUND $>$ 0) lakes. 
A specified RATE is only applied if COUTTYPE for the OUTLETNO is SPECIFIED. If the Options block includes a TIMESERIESFILE entry (see the ``Time-Variable Input'' section), values can be obtained from a time series by entering the time-series name in place of a numeric value. By default, the RATE for each SPECIFIED lake outlet is zero. +description real or character value that defines the extraction rate for the lake outflow. A positive value indicates inflow and a negative value indicates outflow from the lake. RATE only applies to outlets associated with active lakes (STATUS is ACTIVE). A specified RATE is only applied if COUTTYPE for the OUTLETNO is SPECIFIED. If the Options block includes a TIMESERIESFILE entry (see the ``Time-Variable Input'' section), values can be obtained from a time series by entering the time-series name in place of a numeric value. By default, the RATE for each SPECIFIED lake outlet is zero. block period name invert diff --git a/flopy/mf6/data/dfn/gwf-maw.dfn b/flopy/mf6/data/dfn/gwf-maw.dfn index bf64974c4a..d9bf3912e4 100644 --- a/flopy/mf6/data/dfn/gwf-maw.dfn +++ b/flopy/mf6/data/dfn/gwf-maw.dfn @@ -334,21 +334,21 @@ description integer value specifying the number of multi-aquifer wells that will block packagedata name packagedata -type recarray wellno radius bottom strt condeqn ngwfnodes aux boundname +type recarray ifno radius bottom strt condeqn ngwfnodes aux boundname shape (nmawwells) reader urword longname description block packagedata -name wellno +name ifno type integer shape tagged false in_record true reader urword longname well number for this entry -description integer value that defines the well number associated with the specified PACKAGEDATA data on the line. WELLNO must be greater than zero and less than or equal to NMAWWELLS. Multi-aquifer well information must be specified for every multi-aquifer well or the program will terminate with an error. The program will also terminate with an error if information for a multi-aquifer well is specified more than once. +description integer value that defines the feature (well) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NMAWWELLS. Multi-aquifer well information must be specified for every multi-aquifer well or the program will terminate with an error. The program will also terminate with an error if information for a multi-aquifer well is specified more than once. numeric_index true block packagedata @@ -399,7 +399,7 @@ tagged false in_record true reader urword longname number of connected GWF cells -description integer value that defines the number of GWF nodes connected to this (WELLNO) multi-aquifer well. NGWFNODES must be greater than zero. +description integer value that defines the number of GWF nodes connected to this (IFNO) multi-aquifer well. NGWFNODES must be greater than zero. block packagedata name aux @@ -429,20 +429,20 @@ description REPLACE boundname {'{#1}': 'multi-aquifer well'} block connectiondata name connectiondata -type recarray wellno icon cellid scrn_top scrn_bot hk_skin radius_skin +type recarray ifno icon cellid scrn_top scrn_bot hk_skin radius_skin reader urword longname description block connectiondata -name wellno +name ifno type integer shape tagged false in_record true reader urword longname well number for this entry -description integer value that defines the well number associated with the specified CONNECTIONDATA data on the line. 
WELLNO must be greater than zero and less than or equal to NMAWWELLS. Multi-aquifer well connection information must be specified for every multi-aquifer well connection to the GWF model (NGWFNODES) or the program will terminate with an error. The program will also terminate with an error if connection information for a multi-aquifer well connection to the GWF model is specified more than once. +description integer value that defines the feature (well) number associated with the specified CONNECTIONDATA data on the line. IFNO must be greater than zero and less than or equal to NMAWWELLS. Multi-aquifer well connection information must be specified for every multi-aquifer well connection to the GWF model (NGWFNODES) or the program will terminate with an error. The program will also terminate with an error if connection information for a multi-aquifer well connection to the GWF model is specified more than once. numeric_index true block connectiondata @@ -453,7 +453,7 @@ tagged false in_record true reader urword longname connection number -description integer value that defines the GWF connection number for this multi-aquifer well connection entry. ICONN must be greater than zero and less than or equal to NGWFNODES for multi-aquifer well WELLNO. +description integer value that defines the GWF connection number for this multi-aquifer well connection entry. ICONN must be greater than zero and less than or equal to NGWFNODES for multi-aquifer well IFNO. numeric_index true block connectiondata @@ -524,21 +524,21 @@ description REPLACE iper {} block period name perioddata -type recarray wellno mawsetting +type recarray ifno mawsetting shape reader urword longname description block period -name wellno +name ifno type integer shape tagged false in_record true reader urword longname well number for this entry -description integer value that defines the well number associated with the specified PERIOD data on the line. WELLNO must be greater than zero and less than or equal to NMAWWELLS. +description integer value that defines the well number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NMAWWELLS. numeric_index true block period @@ -619,7 +619,7 @@ in_record true reader urword time_series true longname well pumping rate -description is the volumetric pumping rate for the multi-aquifer well. A positive value indicates recharge and a negative value indicates discharge (pumping). RATE only applies to active (IBOUND $>$ 0) multi-aquifer wells. If the Options block includes a TIMESERIESFILE entry (see the ``Time-Variable Input'' section), values can be obtained from a time series by entering the time-series name in place of a numeric value. By default, the RATE for each multi-aquifer well is zero. +description is the volumetric pumping rate for the multi-aquifer well. A positive value indicates recharge and a negative value indicates discharge (pumping). RATE only applies to active (STATUS is ACTIVE) multi-aquifer wells. If the Options block includes a TIMESERIESFILE entry (see the ``Time-Variable Input'' section), values can be obtained from a time series by entering the time-series name in place of a numeric value. By default, the RATE for each multi-aquifer well is zero. 
block period
name well_head
diff --git a/flopy/mf6/data/dfn/gwf-sfr.dfn b/flopy/mf6/data/dfn/gwf-sfr.dfn
index 5a23b919e4..40048aeee5 100644
--- a/flopy/mf6/data/dfn/gwf-sfr.dfn
+++ b/flopy/mf6/data/dfn/gwf-sfr.dfn
@@ -342,21 +342,21 @@ description integer value specifying the number of stream reaches. There must b
 
 block packagedata
 name packagedata
-type recarray rno cellid rlen rwid rgrd rtp rbth rhk man ncon ustrf ndv aux boundname
+type recarray ifno cellid rlen rwid rgrd rtp rbth rhk man ncon ustrf ndv aux boundname
 shape (maxbound)
 reader urword
 longname
 description
 
 block packagedata
-name rno
+name ifno
 type integer
 shape
 tagged false
 in_record true
 reader urword
 longname reach number for this entry
-description integer value that defines the reach number associated with the specified PACKAGEDATA data on the line. RNO must be greater than zero and less than or equal to NREACHES. Reach information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if information for a reach is specified more than once.
+description integer value that defines the feature (reach) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NREACHES. Reach information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if information for a reach is specified more than once.
 numeric_index true
 
 block packagedata
@@ -367,7 +367,7 @@
 tagged false
 in_record true
 reader urword
 longname cell identifier
-description The keyword `NONE' must be specified for reaches that are not connected to an underlying GWF cell. The keyword `NONE' is used for reaches that are in cells that have IDOMAIN values less than one or are in areas not covered by the GWF model grid. Reach-aquifer flow is not calculated if the keyword `NONE' is specified.
+description is the cell identifier, and depends on the type of grid that is used for the simulation. For a structured grid that uses the DIS input file, CELLID is the layer, row, and column. For a grid that uses the DISV input file, CELLID is the layer and CELL2D number. If the model uses the unstructured discretization (DISU) input file, CELLID is the node number for the cell. For reaches that are not connected to an underlying GWF cell, a zero should be specified for each grid dimension. For example, for a DIS grid a CELLID of 0 0 0 should be specified. Reach-aquifer flow is not calculated for unconnected reaches. The keyword NONE can still be specified to identify unconnected reaches for backward compatibility with previous versions of MODFLOW 6 but eventually NONE will be deprecated and will cause MODFLOW 6 to terminate with an error.
 
 block packagedata
 name rlen
@@ -417,7 +417,7 @@
 tagged false
 in_record true
 reader urword
 longname streambed thickness
-description real value that defines the thickness of the reach streambed. RBTH can be any value if CELLID is `NONE'. Otherwise, RBTH must be greater than zero.
+description real value that defines the thickness of the reach streambed. RBTH can be any value if the reach is not connected to an underlying GWF cell. Otherwise, RBTH must be greater than zero.
 
 block packagedata
 name rhk
@@ -427,7 +427,7 @@
 tagged false
 in_record true
 reader urword
 longname
-description real value that defines the hydraulic conductivity of the reach streambed. RHK can be any positive value if CELLID is `NONE'. 
Otherwise, RHK must be greater than zero. +description real value that defines the hydraulic conductivity of the reach streambed. RHK can be any positive value if the reach is not connected to an underlying GWF cell. Otherwise, RHK must be greater than zero. block packagedata name man @@ -448,7 +448,7 @@ tagged false in_record true reader urword longname number of connected reaches -description integer value that defines the number of reaches connected to the reach. If a value of zero is specified for NCON an entry for RNO is still required in the subsequent CONNECTIONDATA block. +description integer value that defines the number of reaches connected to the reach. If a value of zero is specified for NCON an entry for IFNO is still required in the subsequent CONNECTIONDATA block. block packagedata name ustrf @@ -498,7 +498,7 @@ description REPLACE boundname {'{#1}': 'stream reach'} block crosssections name crosssections -type recarray rno tab6 filein tab6_filename +type recarray ifno tab6 filein tab6_filename shape valid optional false @@ -507,14 +507,14 @@ longname description block crosssections -name rno +name ifno type integer shape tagged false in_record true reader urword longname reach number for this entry -description integer value that defines the reach number associated with the specified cross-section table file on the line. RNO must be greater than zero and less than or equal to NREACHES. The program will also terminate with an error if table information for a reach is specified more than once. +description integer value that defines the feature (reach) number associated with the specified cross-section table file on the line. IFNO must be greater than zero and less than or equal to NREACHES. The program will also terminate with an error if table information for a reach is specified more than once. numeric_index true block crosssections @@ -554,27 +554,27 @@ description character string that defines the path and filename for the file con block connectiondata name connectiondata -type recarray rno ic +type recarray ifno ic shape (maxbound) reader urword longname description block connectiondata -name rno +name ifno type integer shape tagged false in_record true reader urword longname reach number for this entry -description integer value that defines the reach number associated with the specified CONNECTIONDATA data on the line. RNO must be greater than zero and less than or equal to NREACHES. Reach connection information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if connection information for a reach is specified more than once. +description integer value that defines the feature (reach) number associated with the specified CONNECTIONDATA data on the line. IFNO must be greater than zero and less than or equal to NREACHES. Reach connection information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if connection information for a reach is specified more than once. 
numeric_index true block connectiondata name ic type integer -shape (ncon(rno)) +shape (ncon(ifno)) tagged false in_record true reader urword @@ -590,21 +590,21 @@ support_negative_index true block diversions name diversions -type recarray rno idv iconr cprior +type recarray ifno idv iconr cprior shape (maxbound) reader urword longname description block diversions -name rno +name ifno type integer shape tagged false in_record true reader urword longname reach number for this entry -description integer value that defines the reach number associated with the specified DIVERSIONS data on the line. RNO must be greater than zero and less than or equal to NREACHES. Reach diversion information must be specified for every reach with a NDV value greater than 0 or the program will terminate with an error. The program will also terminate with an error if diversion information for a given reach diversion is specified more than once. +description integer value that defines the feature (reach) number associated with the specified DIVERSIONS data on the line. IFNO must be greater than zero and less than or equal to NREACHES. Reach diversion information must be specified for every reach with a NDV value greater than 0 or the program will terminate with an error. The program will also terminate with an error if diversion information for a given reach diversion is specified more than once. numeric_index true block diversions @@ -615,7 +615,7 @@ tagged false in_record true reader urword longname downstream diversion number -description integer value that defines the downstream diversion number for the diversion for reach RNO. IDV must be greater than zero and less than or equal to NDV for reach RNO. +description integer value that defines the downstream diversion number for the diversion for reach IFNO. IDV must be greater than zero and less than or equal to NDV for reach IFNO. numeric_index true block diversions @@ -626,7 +626,7 @@ tagged false in_record true reader urword longname downstream reach number for diversion -description integer value that defines the downstream reach that will receive the diverted water. IDV must be greater than zero and less than or equal to NREACHES. Furthermore, reach ICONR must be a downstream connection for reach RNO. +description integer value that defines the downstream reach that will receive the diverted water. IDV must be greater than zero and less than or equal to NREACHES. Furthermore, reach ICONR must be a downstream connection for reach IFNO. numeric_index true block diversions @@ -637,7 +637,7 @@ tagged false in_record true reader urword longname iprior code -description character string value that defines the the prioritization system for the diversion, such as when insufficient water is available to meet all diversion stipulations, and is used in conjunction with the value of FLOW value specified in the STRESS\_PERIOD\_DATA section. Available diversion options include: (1) CPRIOR = `FRACTION', then the amount of the diversion is computed as a fraction of the streamflow leaving reach RNO ($Q_{DS}$); in this case, 0.0 $\le$ DIVFLOW $\le$ 1.0. (2) CPRIOR = `EXCESS', a diversion is made only if $Q_{DS}$ for reach RNO exceeds the value of DIVFLOW. If this occurs, then the quantity of water diverted is the excess flow ($Q_{DS} -$ DIVFLOW) and $Q_{DS}$ from reach RNO is set equal to DIVFLOW. This represents a flood-control type of diversion, as described by Danskin and Hanson (2002). 
(3) CPRIOR = `THRESHOLD', then if $Q_{DS}$ in reach RNO is less than the specified diversion flow DIVFLOW, no water is diverted from reach RNO. If $Q_{DS}$ in reach RNO is greater than or equal to DIVFLOW, DIVFLOW is diverted and $Q_{DS}$ is set to the remainder ($Q_{DS} -$ DIVFLOW)). This approach assumes that once flow in the stream is sufficiently low, diversions from the stream cease, and is the `priority' algorithm that originally was programmed into the STR1 Package (Prudic, 1989). (4) CPRIOR = `UPTO' -- if $Q_{DS}$ in reach RNO is greater than or equal to the specified diversion flow DIVFLOW, $Q_{DS}$ is reduced by DIVFLOW. If $Q_{DS}$ in reach RNO is less than DIVFLOW, DIVFLOW is set to $Q_{DS}$ and there will be no flow available for reaches connected to downstream end of reach RNO.
+description character string value that defines the prioritization system for the diversion, such as when insufficient water is available to meet all diversion stipulations, and is used in conjunction with the FLOW value specified in the STRESS\_PERIOD\_DATA section. Available diversion options include: (1) CPRIOR = `FRACTION', then the amount of the diversion is computed as a fraction of the streamflow leaving reach IFNO ($Q_{DS}$); in this case, 0.0 $\le$ DIVFLOW $\le$ 1.0. (2) CPRIOR = `EXCESS', a diversion is made only if $Q_{DS}$ for reach IFNO exceeds the value of DIVFLOW. If this occurs, then the quantity of water diverted is the excess flow ($Q_{DS} -$ DIVFLOW) and $Q_{DS}$ from reach IFNO is set equal to DIVFLOW. This represents a flood-control type of diversion, as described by Danskin and Hanson (2002). (3) CPRIOR = `THRESHOLD', then if $Q_{DS}$ in reach IFNO is less than the specified diversion flow DIVFLOW, no water is diverted from reach IFNO. If $Q_{DS}$ in reach IFNO is greater than or equal to DIVFLOW, DIVFLOW is diverted and $Q_{DS}$ is set to the remainder ($Q_{DS} -$ DIVFLOW). This approach assumes that once flow in the stream is sufficiently low, diversions from the stream cease, and is the `priority' algorithm that originally was programmed into the STR1 Package (Prudic, 1989). (4) CPRIOR = `UPTO' -- if $Q_{DS}$ in reach IFNO is greater than or equal to the specified diversion flow DIVFLOW, $Q_{DS}$ is reduced by DIVFLOW. If $Q_{DS}$ in reach IFNO is less than DIVFLOW, DIVFLOW is set to $Q_{DS}$ and there will be no flow available for reaches connected to downstream end of reach IFNO.

# --------------------- gwf sfr period ---------------------

block period
name iper
type integer
block_variable True
in_record true
tagged false
shape
valid
reader urword
optional false
longname stress period number
description REPLACE iper {}

block period
name perioddata
-type recarray rno sfrsetting
+type recarray ifno sfrsetting
shape
reader urword
longname
description

block period
-name rno
+name ifno
type integer
shape
tagged false
in_record true
reader urword
longname reach number for this entry
-description integer value that defines the reach number associated with the specified PERIOD data on the line. RNO must be greater than zero and less than or equal to NREACHES.
+description integer value that defines the feature (reach) number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NREACHES.
 numeric_index true

 block period
@@ -787,7 +787,7 @@
 tagged false
 in_record true
 reader urword
 longname diversion number
-description an integer value specifying which diversion of reach RNO that DIVFLOW is being specified for. Must be less or equal to ndv for the current reach (RNO). 
+description an integer value specifying which diversion of reach IFNO DIVFLOW is being specified for. Must be less than or equal to NDV for the current reach (IFNO).
 numeric_index true

 block period
diff --git a/flopy/mf6/data/dfn/gwf-uzf.dfn b/flopy/mf6/data/dfn/gwf-uzf.dfn
index 02833c0896..1b61e1fb6b 100644
--- a/flopy/mf6/data/dfn/gwf-uzf.dfn
+++ b/flopy/mf6/data/dfn/gwf-uzf.dfn
@@ -365,21 +365,21 @@ default_value 40

 block packagedata
 name packagedata
-type recarray iuzno cellid landflag ivertcon surfdep vks thtr thts thti eps boundname
+type recarray ifno cellid landflag ivertcon surfdep vks thtr thts thti eps boundname
 shape (nuzfcells)
 reader urword
 longname
 description

 block packagedata
-name iuzno
+name ifno
 type integer
 shape
 tagged false
 in_record true
 reader urword
 longname uzf id number for this entry
-description integer value that defines the UZF cell number associated with the specified PACKAGEDATA data on the line. IUZNO must be greater than zero and less than or equal to NUZFCELLS. UZF information must be specified for every UZF cell or the program will terminate with an error. The program will also terminate with an error if information for a UZF cell is specified more than once.
+description integer value that defines the feature (UZF object) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NUZFCELLS. UZF information must be specified for every UZF cell or the program will terminate with an error. The program will also terminate with an error if information for a UZF cell is specified more than once.
 numeric_index true

 block packagedata
@@ -502,21 +502,21 @@ description REPLACE iper {}

 block period
 name perioddata
-type recarray iuzno finf pet extdp extwc ha hroot rootact aux
+type recarray ifno finf pet extdp extwc ha hroot rootact aux
 shape
 reader urword
 longname
 description

 block period
-name iuzno
+name ifno
 type integer
 shape
 tagged false
 in_record true
 reader urword
 longname UZF id number
-description integer value that defines the UZF cell number associated with the specified PERIOD data on the line.
+description integer value that defines the feature (UZF object) number associated with the specified PERIOD data on the line.
 numeric_index true

 block period
diff --git a/flopy/mf6/data/dfn/gwt-lkt.dfn b/flopy/mf6/data/dfn/gwt-lkt.dfn
index 44fb4422ca..60b5aaf773 100644
--- a/flopy/mf6/data/dfn/gwt-lkt.dfn
+++ b/flopy/mf6/data/dfn/gwt-lkt.dfn
@@ -259,21 +259,21 @@ description REPLACE obs6_filename {'{#1}': 'LKT'}

 block packagedata
 name packagedata
-type recarray lakeno strt aux boundname
+type recarray ifno strt aux boundname
 shape (maxbound)
 reader urword
 longname
 description

 block packagedata
-name lakeno
+name ifno
 type integer
 shape
 tagged false
 in_record true
 reader urword
 longname lake number for this entry
-description integer value that defines the lake number associated with the specified PACKAGEDATA data on the line. LAKENO must be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. The program will also terminate with an error if information for a lake is specified more than once.
+description integer value that defines the feature (lake) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. 
The program will also terminate with an error if information for a lake is specified more than once. numeric_index true block packagedata @@ -327,21 +327,21 @@ description REPLACE iper {} block period name lakeperioddata -type recarray lakeno laksetting +type recarray ifno laksetting shape reader urword longname description block period -name lakeno +name ifno type integer shape tagged false in_record true reader urword longname lake number for this entry -description integer value that defines the lake number associated with the specified PERIOD data on the line. LAKENO must be greater than zero and less than or equal to NLAKES. +description integer value that defines the feature (lake) number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NLAKES. numeric_index true block period diff --git a/flopy/mf6/data/dfn/gwt-mwt.dfn b/flopy/mf6/data/dfn/gwt-mwt.dfn index 67529c4b8a..4eef9970b8 100644 --- a/flopy/mf6/data/dfn/gwt-mwt.dfn +++ b/flopy/mf6/data/dfn/gwt-mwt.dfn @@ -259,21 +259,21 @@ description REPLACE obs6_filename {'{#1}': 'MWT'} block packagedata name packagedata -type recarray mawno strt aux boundname +type recarray ifno strt aux boundname shape (maxbound) reader urword longname description block packagedata -name mawno +name ifno type integer shape tagged false in_record true reader urword longname well number for this entry -description integer value that defines the well number associated with the specified PACKAGEDATA data on the line. MAWNO must be greater than zero and less than or equal to NMAWWELLS. Well information must be specified for every well or the program will terminate with an error. The program will also terminate with an error if information for a well is specified more than once. +description integer value that defines the feature (well) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NMAWWELLS. Well information must be specified for every well or the program will terminate with an error. The program will also terminate with an error if information for a well is specified more than once. numeric_index true block packagedata @@ -327,21 +327,21 @@ description REPLACE iper {} block period name mwtperioddata -type recarray mawno mwtsetting +type recarray ifno mwtsetting shape reader urword longname description block period -name mawno +name ifno type integer shape tagged false in_record true reader urword longname well number for this entry -description integer value that defines the well number associated with the specified PERIOD data on the line. MAWNO must be greater than zero and less than or equal to NMAWWELLS. +description integer value that defines the feature (well) number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NMAWWELLS. 
numeric_index true block period diff --git a/flopy/mf6/data/dfn/gwt-sft.dfn b/flopy/mf6/data/dfn/gwt-sft.dfn index dc98b6a2be..5323f4c7c5 100644 --- a/flopy/mf6/data/dfn/gwt-sft.dfn +++ b/flopy/mf6/data/dfn/gwt-sft.dfn @@ -259,21 +259,21 @@ description REPLACE obs6_filename {'{#1}': 'SFT'} block packagedata name packagedata -type recarray rno strt aux boundname +type recarray ifno strt aux boundname shape (maxbound) reader urword longname description block packagedata -name rno +name ifno type integer shape tagged false in_record true reader urword longname reach number for this entry -description integer value that defines the reach number associated with the specified PACKAGEDATA data on the line. RNO must be greater than zero and less than or equal to NREACHES. Reach information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if information for a reach is specified more than once. +description integer value that defines the feature (reach) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NREACHES. Reach information must be specified for every reach or the program will terminate with an error. The program will also terminate with an error if information for a reach is specified more than once. numeric_index true block packagedata @@ -327,21 +327,21 @@ description REPLACE iper {} block period name reachperioddata -type recarray rno reachsetting +type recarray ifno reachsetting shape reader urword longname description block period -name rno +name ifno type integer shape tagged false in_record true reader urword longname reach number for this entry -description integer value that defines the reach number associated with the specified PERIOD data on the line. RNO must be greater than zero and less than or equal to NREACHES. +description integer value that defines the feature (reach) number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NREACHES. numeric_index true block period diff --git a/flopy/mf6/data/dfn/gwt-uzt.dfn b/flopy/mf6/data/dfn/gwt-uzt.dfn index d95baabe6e..00524848bd 100644 --- a/flopy/mf6/data/dfn/gwt-uzt.dfn +++ b/flopy/mf6/data/dfn/gwt-uzt.dfn @@ -259,21 +259,21 @@ description REPLACE obs6_filename {'{#1}': 'UZT'} block packagedata name packagedata -type recarray uzfno strt aux boundname +type recarray ifno strt aux boundname shape (maxbound) reader urword longname description block packagedata -name uzfno +name ifno type integer shape tagged false in_record true reader urword longname UZF cell number for this entry -description integer value that defines the UZF cell number associated with the specified PACKAGEDATA data on the line. UZFNO must be greater than zero and less than or equal to NUZFCELLS. Unsaturated zone flow information must be specified for every UZF cell or the program will terminate with an error. The program will also terminate with an error if information for a UZF cell is specified more than once. +description integer value that defines the feature (UZF object) number associated with the specified PACKAGEDATA data on the line. IFNO must be greater than zero and less than or equal to NUZFCELLS. Unsaturated zone flow information must be specified for every UZF cell or the program will terminate with an error. The program will also terminate with an error if information for a UZF cell is specified more than once. 
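Each renamed IFNO field above is flagged `numeric_index true`, meaning FloPy converts between the one-based feature numbers written to MODFLOW 6 input files and the zero-based numbers used in Python. A minimal sketch of that round-trip (function names are hypothetical, not FloPy API):

```python
# numeric_index convention: one-based in the MODFLOW 6 file,
# zero-based when working with FloPy in Python.
def on_load(ifno_in_file: int) -> int:
    return ifno_in_file - 1    # FloPy subtracts one when loading

def on_write(ifno_in_python: int) -> int:
    return ifno_in_python + 1  # FloPy adds one when writing

assert on_load(on_write(0)) == 0  # first reach/lake/well/UZF object
```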
numeric_index true block packagedata @@ -327,21 +327,21 @@ description REPLACE iper {} block period name uztperioddata -type recarray uzfno uztsetting +type recarray ifno uztsetting shape reader urword longname description block period -name uzfno +name ifno type integer shape tagged false in_record true reader urword longname unsaturated zone flow cell number for this entry -description integer value that defines the UZF cell number associated with the specified PERIOD data on the line. UZFNO must be greater than zero and less than or equal to NUZFCELLS. +description integer value that defines the feature (UZF object) number associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NUZFCELLS. numeric_index true block period diff --git a/flopy/mf6/modflow/__init__.py b/flopy/mf6/modflow/__init__.py index 991b29d667..e5a5ce1668 100644 --- a/flopy/mf6/modflow/__init__.py +++ b/flopy/mf6/modflow/__init__.py @@ -58,6 +58,7 @@ from .mfmvr import ModflowMvr from .mfmvt import ModflowMvt from .mfnam import ModflowNam +from .mfsimulation import MFSimulation from .mftdis import ModflowTdis from .mfutlats import ModflowUtlats from .mfutllaktab import ModflowUtllaktab diff --git a/flopy/mf6/modflow/mfgwflak.py b/flopy/mf6/modflow/mfgwflak.py index 24570d2c0b..dc01e74385 100644 --- a/flopy/mf6/modflow/mfgwflak.py +++ b/flopy/mf6/modflow/mfgwflak.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -128,10 +128,10 @@ class ModflowGwflak(mfpackage.MFPackage): * ntables (integer) value specifying the number of lakes tables that will be used to define the lake stage, volume relation, and surface area. If NTABLES is not specified, a default value of zero is used. - packagedata : [lakeno, strt, nlakeconn, aux, boundname] - * lakeno (integer) integer value that defines the lake number - associated with the specified PACKAGEDATA data on the line. LAKENO - must be greater than zero and less than or equal to NLAKES. Lake + packagedata : [ifno, strt, nlakeconn, aux, boundname] + * ifno (integer) integer value that defines the feature (lake) number + associated with the specified PACKAGEDATA data on the line. IFNO must + be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. The program will also terminate with an error if information for a lake is specified more than once. This @@ -142,7 +142,7 @@ class ModflowGwflak(mfpackage.MFPackage): * strt (double) real value that defines the starting stage for the lake. * nlakeconn (integer) integer value that defines the number of GWF - cells connected to this (LAKENO) lake. There can only be one vertical + cells connected to this (IFNO) lake. There can only be one vertical lake connection to each GWF cell. NLAKECONN must be greater than zero. * aux (double) represents the values of the auxiliary variables for @@ -157,10 +157,10 @@ class ModflowGwflak(mfpackage.MFPackage): character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. 
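With the import added to `flopy/mf6/modflow/__init__.py` above, `MFSimulation` should also be importable from the `flopy.mf6.modflow` namespace. A minimal sketch, assuming FloPy is installed:

```python
from flopy.mf6.modflow import MFSimulation  # newly re-exported here

sim = MFSimulation(sim_name="demo", sim_ws=".")  # in-memory simulation
print(type(sim).__name__)  # -> "MFSimulation"
```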
- connectiondata : [lakeno, iconn, cellid, claktype, bedleak, belev, telev, + connectiondata : [ifno, iconn, cellid, claktype, bedleak, belev, telev, connlen, connwidth] - * lakeno (integer) integer value that defines the lake number - associated with the specified CONNECTIONDATA data on the line. LAKENO + * ifno (integer) integer value that defines the feature (lake) number + associated with the specified CONNECTIONDATA data on the line. IFNO must be greater than zero and less than or equal to NLAKES. Lake connection information must be specified for every lake connection to the GWF model (NLAKECONN) or the program will terminate with an @@ -172,7 +172,7 @@ class ModflowGwflak(mfpackage.MFPackage): and add one when writing index variables. * iconn (integer) integer value that defines the GWF connection number for this lake connection entry. ICONN must be greater than zero and - less than or equal to NLAKECONN for lake LAKENO. This argument is an + less than or equal to NLAKECONN for lake IFNO. This argument is an index variable, which means that it should be treated as zero-based when working with FloPy and Python. Flopy will automatically subtract one when loading index variables and add one when writing index @@ -207,12 +207,15 @@ class ModflowGwflak(mfpackage.MFPackage): CELLID in the NPF package. Embedded lakes can only be connected to a single cell (NLAKECONN = 1) and there must be a lake table associated with each embedded lake. - * bedleak (string) character string or real value that defines the bed + * bedleak (string) real value or character string that defines the bed leakance for the lake-GWF connection. BEDLEAK must be greater than or - equal to zero or specified to be NONE. If BEDLEAK is specified to be - NONE, the lake-GWF connection conductance is solely a function of - aquifer properties in the connected GWF cell and lakebed sediments - are assumed to be absent. + equal to zero, equal to the DNODATA value (3.0E+30), or specified to + be NONE. If DNODATA or NONE is specified for BEDLEAK, the lake-GWF + connection conductance is solely a function of aquifer properties in + the connected GWF cell and lakebed sediments are assumed to be + absent. Warning messages will be issued if NONE is specified. + Eventually the ability to specify NONE will be deprecated and cause + MODFLOW 6 to terminate with an error. * belev (double) real value that defines the bottom elevation for a HORIZONTAL lake-GWF connection. Any value can be specified if CLAKTYPE is VERTICAL, EMBEDDEDH, or EMBEDDEDV. If CLAKTYPE is @@ -234,9 +237,9 @@ class ModflowGwflak(mfpackage.MFPackage): for a HORIZONTAL lake-GWF connection. CONNWIDTH must be greater than zero for a HORIZONTAL lake-GWF connection. Any value can be specified if CLAKTYPE is VERTICAL, EMBEDDEDH, or EMBEDDEDV. - tables : [lakeno, tab6_filename] - * lakeno (integer) integer value that defines the lake number - associated with the specified TABLES data on the line. LAKENO must be + tables : [ifno, tab6_filename] + * ifno (integer) integer value that defines the feature (lake) number + associated with the specified TABLES data on the line. IFNO must be greater than zero and less than or equal to NLAKES. The program will terminate with an error if table information for a lake is specified more than once or the number of specified tables is less than @@ -373,13 +376,13 @@ class ModflowGwflak(mfpackage.MFPackage): * rate (string) real or character value that defines the extraction rate for the lake outflow. 
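A hedged sketch of the BEDLEAK change documented above: a LAK CONNECTIONDATA record may now carry the DNODATA value (3.0E+30) in place of the soon-to-be-deprecated keyword NONE. The record below is hypothetical, with zero-based `ifno`/`iconn` as FloPy expects:

```python
DNODATA = 3.0e30  # sentinel noted above for an absent lakebed leakance

# Field order per the docstring above: ifno, iconn, cellid, claktype,
# bedleak, belev, telev, connlen, connwidth (values illustrative; the
# last four may be any value for a VERTICAL connection).
lak_connectiondata = [
    (0, 0, (0, 4, 4), "VERTICAL", DNODATA, 0.0, 0.0, 0.0, 0.0),
]
```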
A positive value indicates inflow and a negative value indicates outflow from - the lake. RATE only applies to active (IBOUND > 0) lakes. A - specified RATE is only applied if COUTTYPE for the OUTLETNO - is SPECIFIED. If the Options block includes a TIMESERIESFILE - entry (see the "Time-Variable Input" section), values can be - obtained from a time series by entering the time-series name - in place of a numeric value. By default, the RATE for each - SPECIFIED lake outlet is zero. + the lake. RATE only applies to outlets associated with active + lakes (STATUS is ACTIVE). A specified RATE is only applied if + COUTTYPE for the OUTLETNO is SPECIFIED. If the Options block + includes a TIMESERIESFILE entry (see the "Time-Variable + Input" section), values can be obtained from a time series by + entering the time-series name in place of a numeric value. By + default, the RATE for each SPECIFIED lake outlet is zero. invert : [string] * invert (string) real or character value that defines the invert elevation for the lake outlet. A specified INVERT @@ -785,13 +788,13 @@ class ModflowGwflak(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray lakeno strt nlakeconn aux boundname", + "type recarray ifno strt nlakeconn aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name lakeno", + "name ifno", "type integer", "shape", "tagged false", @@ -841,14 +844,14 @@ class ModflowGwflak(mfpackage.MFPackage): [ "block connectiondata", "name connectiondata", - "type recarray lakeno iconn cellid claktype bedleak belev telev " + "type recarray ifno iconn cellid claktype bedleak belev telev " "connlen connwidth", "shape (sum(nlakeconn))", "reader urword", ], [ "block connectiondata", - "name lakeno", + "name ifno", "type integer", "shape", "tagged false", @@ -932,13 +935,13 @@ class ModflowGwflak(mfpackage.MFPackage): [ "block tables", "name tables", - "type recarray lakeno tab6 filein tab6_filename", + "type recarray ifno tab6 filein tab6_filename", "shape (ntables)", "reader urword", ], [ "block tables", - "name lakeno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwfmaw.py b/flopy/mf6/modflow/mfgwfmaw.py index 0f11efbe9a..51ca09dfd9 100644 --- a/flopy/mf6/modflow/mfgwfmaw.py +++ b/flopy/mf6/modflow/mfgwfmaw.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -125,11 +125,11 @@ class ModflowGwfmaw(mfpackage.MFPackage): nmawwells : integer * nmawwells (integer) integer value specifying the number of multi- aquifer wells that will be simulated for all stress periods. - packagedata : [wellno, radius, bottom, strt, condeqn, ngwfnodes, aux, + packagedata : [ifno, radius, bottom, strt, condeqn, ngwfnodes, aux, boundname] - * wellno (integer) integer value that defines the well number - associated with the specified PACKAGEDATA data on the line. WELLNO - must be greater than zero and less than or equal to NMAWWELLS. Multi- + * ifno (integer) integer value that defines the feature (well) number + associated with the specified PACKAGEDATA data on the line. IFNO must + be greater than zero and less than or equal to NMAWWELLS. Multi- aquifer well information must be specified for every multi-aquifer well or the program will terminate with an error. 
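The RATE clarification near the start of this hunk ties outlet rates to lake STATUS rather than IBOUND. A hypothetical PERIOD block expressing that pairing, with zero-based numbers as FloPy expects (record contents illustrative):

```python
# Stress period 0 (zero-based): keep lake 0 active and set a specified
# rate on outlet 0; RATE only takes effect while STATUS is ACTIVE.
lak_perioddata = {
    0: [
        (0, "status", "active"),
        (0, "rate", -50.0),  # negative indicates outflow from the lake
    ],
}
```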
The program will also terminate with an error if information for a multi-aquifer well
@@ -176,8 +176,8 @@ class ModflowGwfmaw(mfpackage.MFPackage):
      error condition occurs, it is suggested that the THIEM or MEAN
      conductance equations be used for these multi-aquifer wells.
    * ngwfnodes (integer) integer value that defines the number of GWF
-      nodes connected to this (WELLNO) multi-aquifer well. NGWFNODES must
-      be greater than zero.
+      nodes connected to this (IFNO) multi-aquifer well. NGWFNODES must be
+      greater than zero.
    * aux (double) represents the values of the auxiliary variables for
      each multi-aquifer well. The values of auxiliary variables must be
      present for each multi-aquifer well. The values must be specified in
@@ -190,10 +190,10 @@ class ModflowGwfmaw(mfpackage.MFPackage):
      an ASCII character variable that can contain as many as 40
      characters. If BOUNDNAME contains spaces in it, then the entire name
      must be enclosed within single quotes.
-    connectiondata : [wellno, icon, cellid, scrn_top, scrn_bot, hk_skin,
+    connectiondata : [ifno, icon, cellid, scrn_top, scrn_bot, hk_skin,
      radius_skin]
-    * wellno (integer) integer value that defines the well number
-      associated with the specified CONNECTIONDATA data on the line. WELLNO
+    * ifno (integer) integer value that defines the feature (well) number
+      associated with the specified CONNECTIONDATA data on the line. IFNO
      must be greater than zero and less than or equal to NMAWWELLS. Multi-
      aquifer well connection information must be specified for every
      multi-aquifer well connection to the GWF model (NGWFNODES) or the
@@ -207,10 +207,10 @@ class ModflowGwfmaw(mfpackage.MFPackage):
    * icon (integer) integer value that defines the GWF connection number
      for this multi-aquifer well connection entry. ICONN must be greater
      than zero and less than or equal to NGWFNODES for multi-aquifer well
-      WELLNO. This argument is an index variable, which means that it
-      should be treated as zero-based when working with FloPy and Python.
-      Flopy will automatically subtract one when loading index variables
-      and add one when writing index variables.
+      IFNO. This argument is an index variable, which means that it should
+      be treated as zero-based when working with FloPy and Python. Flopy
+      will automatically subtract one when loading index variables and add
+      one when writing index variables.
    * cellid ((integer, ...)) is the cell identifier, and depends on the
      type of grid that is used for the simulation. For a structured grid
      that uses the DIS input file, CELLID is the layer, row, and column.
@@ -259,14 +259,14 @@ class ModflowGwfmaw(mfpackage.MFPackage):
      if CONDEQN is SPECIFIED or THIEM. If CONDEQN is SKIN, CUMULATIVE, or
      MEAN, the program will terminate with an error if RADIUS_SKIN is less
      than or equal to the RADIUS for the multi-aquifer well.
-    perioddata : [wellno, mawsetting]
-    * wellno (integer) integer value that defines the well number
-      associated with the specified PERIOD data on the line. WELLNO must be
-      greater than zero and less than or equal to NMAWWELLS. This argument
-      is an index variable, which means that it should be treated as zero-
-      based when working with FloPy and Python. Flopy will automatically
-      subtract one when loading index variables and add one when writing
-      index variables.
+    perioddata : [ifno, mawsetting]
+    * ifno (integer) integer value that defines the feature (well) number
+      associated with the specified PERIOD data on the line. IFNO must be
+      greater than zero and less than or equal to NMAWWELLS.
This argument is an index + variable, which means that it should be treated as zero-based when + working with FloPy and Python. Flopy will automatically subtract one + when loading index variables and add one when writing index + variables. * mawsetting (keystring) line of information that is parsed into a keyword and values. Keyword values that can be used to start the MAWSETTING string include: STATUS, FLOWING_WELL, RATE, WELL_HEAD, @@ -291,12 +291,12 @@ class ModflowGwfmaw(mfpackage.MFPackage): * rate (double) is the volumetric pumping rate for the multi- aquifer well. A positive value indicates recharge and a negative value indicates discharge (pumping). RATE only - applies to active (IBOUND > 0) multi-aquifer wells. If the - Options block includes a TIMESERIESFILE entry (see the "Time- - Variable Input" section), values can be obtained from a time - series by entering the time-series name in place of a numeric - value. By default, the RATE for each multi-aquifer well is - zero. + applies to active (STATUS is ACTIVE) multi-aquifer wells. If + the Options block includes a TIMESERIESFILE entry (see the + "Time-Variable Input" section), values can be obtained from a + time series by entering the time-series name in place of a + numeric value. By default, the RATE for each multi-aquifer + well is zero. well_head : [double] * well_head (double) is the head in the multi-aquifer well. WELL_HEAD is only applied to constant head (STATUS is @@ -700,14 +700,14 @@ class ModflowGwfmaw(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray wellno radius bottom strt condeqn ngwfnodes aux " + "type recarray ifno radius bottom strt condeqn ngwfnodes aux " "boundname", "shape (nmawwells)", "reader urword", ], [ "block packagedata", - "name wellno", + "name ifno", "type integer", "shape", "tagged false", @@ -784,13 +784,13 @@ class ModflowGwfmaw(mfpackage.MFPackage): [ "block connectiondata", "name connectiondata", - "type recarray wellno icon cellid scrn_top scrn_bot hk_skin " + "type recarray ifno icon cellid scrn_top scrn_bot hk_skin " "radius_skin", "reader urword", ], [ "block connectiondata", - "name wellno", + "name ifno", "type integer", "shape", "tagged false", @@ -868,13 +868,13 @@ class ModflowGwfmaw(mfpackage.MFPackage): [ "block period", "name perioddata", - "type recarray wellno mawsetting", + "type recarray ifno mawsetting", "shape", "reader urword", ], [ "block period", - "name wellno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwfsfr.py b/flopy/mf6/modflow/mfgwfsfr.py index 0ff6e66df4..e32fb7f3bd 100644 --- a/flopy/mf6/modflow/mfgwfsfr.py +++ b/flopy/mf6/modflow/mfgwfsfr.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -131,27 +131,35 @@ class ModflowGwfsfr(mfpackage.MFPackage): nreaches : integer * nreaches (integer) integer value specifying the number of stream reaches. There must be NREACHES entries in the PACKAGEDATA block. - packagedata : [rno, cellid, rlen, rwid, rgrd, rtp, rbth, rhk, man, ncon, + packagedata : [ifno, cellid, rlen, rwid, rgrd, rtp, rbth, rhk, man, ncon, ustrf, ndv, aux, boundname] - * rno (integer) integer value that defines the reach number associated - with the specified PACKAGEDATA data on the line. 
RNO must be greater
-      than zero and less than or equal to NREACHES. Reach information must
-      be specified for every reach or the program will terminate with an
-      error. The program will also terminate with an error if information
-      for a reach is specified more than once. This argument is an index
-      variable, which means that it should be treated as zero-based when
-      working with FloPy and Python. Flopy will automatically subtract one
-      when loading index variables and add one when writing index
-      variables.
-    * cellid ((integer, ...)) The keyword 'NONE' must be specified for
-      reaches that are not connected to an underlying GWF cell. The keyword
-      'NONE' is used for reaches that are in cells that have IDOMAIN values
-      less than one or are in areas not covered by the GWF model grid.
-      Reach-aquifer flow is not calculated if the keyword 'NONE' is
-      specified. This argument is an index variable, which means that it
-      should be treated as zero-based when working with FloPy and Python.
-      Flopy will automatically subtract one when loading index variables
-      and add one when writing index variables.
+    * ifno (integer) integer value that defines the feature (reach) number
+      associated with the specified PACKAGEDATA data on the line. IFNO must
+      be greater than zero and less than or equal to NREACHES. Reach
+      information must be specified for every reach or the program will
+      terminate with an error. The program will also terminate with an
+      error if information for a reach is specified more than once. This
+      argument is an index variable, which means that it should be treated
+      as zero-based when working with FloPy and Python. Flopy will
+      automatically subtract one when loading index variables and add one
+      when writing index variables.
+    * cellid ((integer, ...)) is the cell identifier, and depends on the
+      type of grid that is used for the simulation. For a structured grid
+      that uses the DIS input file, CELLID is the layer, row, and column.
+      For a grid that uses the DISV input file, CELLID is the layer and
+      CELL2D number. If the model uses the unstructured discretization
+      (DISU) input file, CELLID is the node number for the cell. For
+      reaches that are not connected to an underlying GWF cell, a zero
+      should be specified for each grid dimension. For example, for a DIS
+      grid a CELLID of 0 0 0 should be specified. Reach-aquifer flow is not
+      calculated for unconnected reaches. The keyword NONE can still be
+      specified to identify unconnected reaches for backward compatibility
+      with previous versions of MODFLOW 6 but eventually NONE will be
+      deprecated and will cause MODFLOW 6 to terminate with an error. This
+      argument is an index variable, which means that it should be treated
+      as zero-based when working with FloPy and Python. Flopy will
+      automatically subtract one when loading index variables and add one
+      when writing index variables.
    * rlen (double) real value that defines the reach length. RLEN must be
      greater than zero.
    * rwid (double) real value that defines the reach width. RWID must be
      greater than zero.
@@ -161,11 +169,12 @@ class ModflowGwfsfr(mfpackage.MFPackage):
    * rgrd (double) real value that defines the stream gradient (slope)
      across the reach. RGRD must be greater than zero.
    * rtp (double) real value that defines the bottom elevation of the
      reach.
    * rbth (double) real value that defines the thickness of the reach
-      streambed. RBTH can be any value if CELLID is 'NONE'. Otherwise, RBTH
-      must be greater than zero.
+      streambed. RBTH can be any value if the reach is not connected to an
+      underlying GWF cell. Otherwise, RBTH must be greater than zero.
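The CELLID change above replaces the keyword NONE with an all-zero cell identifier for unconnected reaches. A hypothetical PACKAGEDATA line as it would appear in the MODFLOW 6 input file (one-based numbering; values illustrative):

```python
# Field order: ifno cellid(3) rlen rwid rgrd rtp rbth rhk man ncon ustrf ndv.
# CELLID "0 0 0" marks a reach with no GWF connection on a DIS grid.
unconnected_reach_line = (
    "1  0 0 0  100.0  5.0  0.001  10.0  1.0  1.0  0.035  1  1.0  0"
)
print(unconnected_reach_line)
```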
* rhk (double) real value that defines the hydraulic conductivity of - the reach streambed. RHK can be any positive value if CELLID is - 'NONE'. Otherwise, RHK must be greater than zero. + the reach streambed. RHK can be any positive value if the reach is + not connected to an underlying GWF cell. Otherwise, RHK must be + greater than zero. * man (string) real or character value that defines the Manning's roughness coefficient for the reach. MAN must be greater than zero. If the Options block includes a TIMESERIESFILE entry (see the "Time- @@ -173,7 +182,7 @@ class ModflowGwfsfr(mfpackage.MFPackage): by entering the time-series name in place of a numeric value. * ncon (integer) integer value that defines the number of reaches connected to the reach. If a value of zero is specified for NCON an - entry for RNO is still required in the subsequent CONNECTIONDATA + entry for IFNO is still required in the subsequent CONNECTIONDATA block. * ustrf (double) real value that defines the fraction of upstream flow from each upstream reach that is applied as upstream inflow to the @@ -197,15 +206,16 @@ class ModflowGwfsfr(mfpackage.MFPackage): ASCII character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. - crosssections : [rno, tab6_filename] - * rno (integer) integer value that defines the reach number associated - with the specified cross-section table file on the line. RNO must be - greater than zero and less than or equal to NREACHES. The program - will also terminate with an error if table information for a reach is - specified more than once. This argument is an index variable, which - means that it should be treated as zero-based when working with FloPy - and Python. Flopy will automatically subtract one when loading index - variables and add one when writing index variables. + crosssections : [ifno, tab6_filename] + * ifno (integer) integer value that defines the feature (reach) number + associated with the specified cross-section table file on the line. + IFNO must be greater than zero and less than or equal to NREACHES. + The program will also terminate with an error if table information + for a reach is specified more than once. This argument is an index + variable, which means that it should be treated as zero-based when + working with FloPy and Python. Flopy will automatically subtract one + when loading index variables and add one when writing index + variables. * tab6_filename (string) character string that defines the path and filename for the file containing cross-section table data for the reach. The TAB6_FILENAME file includes the number of entries in the @@ -213,10 +223,10 @@ class ModflowGwfsfr(mfpackage.MFPackage): and the reach depth. Instructions for creating the TAB6_FILENAME input file are provided in SFR Reach Cross-Section Table Input File section. - connectiondata : [rno, ic] - * rno (integer) integer value that defines the reach number associated - with the specified CONNECTIONDATA data on the line. RNO must be - greater than zero and less than or equal to NREACHES. Reach + connectiondata : [ifno, ic] + * ifno (integer) integer value that defines the feature (reach) number + associated with the specified CONNECTIONDATA data on the line. IFNO + must be greater than zero and less than or equal to NREACHES. Reach connection information must be specified for every reach or the program will terminate with an error. 
The program will also terminate
with an error if connection information for a reach is specified more
@@ -236,12 +246,12 @@ class ModflowGwfsfr(mfpackage.MFPackage):
      that it should be treated as zero-based when working with FloPy and
      Python. Flopy will automatically subtract one when loading index
      variables and add one when writing index variables.
-    diversions : [rno, idv, iconr, cprior]
-    * rno (integer) integer value that defines the reach number associated
-      with the specified DIVERSIONS data on the line. RNO must be greater
-      than zero and less than or equal to NREACHES. Reach diversion
-      information must be specified for every reach with a NDV value
-      greater than 0 or the program will terminate with an error. The
+    diversions : [ifno, idv, iconr, cprior]
+    * ifno (integer) integer value that defines the feature (reach) number
+      associated with the specified DIVERSIONS data on the line. IFNO must
+      be greater than zero and less than or equal to NREACHES. Reach
+      diversion information must be specified for every reach with a NDV
+      value greater than 0 or the program will terminate with an error. The
      program will also terminate with an error if diversion information
      for a given reach diversion is specified more than once. This
      argument is an index variable, which means that it should be treated
@@ -249,16 +259,16 @@ class ModflowGwfsfr(mfpackage.MFPackage):
      automatically subtract one when loading index variables and add one
      when writing index variables.
    * idv (integer) integer value that defines the downstream diversion
-      number for the diversion for reach RNO. IDV must be greater than zero
-      and less than or equal to NDV for reach RNO. This argument is an
-      index variable, which means that it should be treated as zero-based
-      when working with FloPy and Python. Flopy will automatically subtract
-      one when loading index variables and add one when writing index
-      variables.
+      number for the diversion for reach IFNO. IDV must be greater than
+      zero and less than or equal to NDV for reach IFNO. This argument is
+      an index variable, which means that it should be treated as zero-
+      based when working with FloPy and Python. Flopy will automatically
+      subtract one when loading index variables and add one when writing
+      index variables.
    * iconr (integer) integer value that defines the downstream reach that
      will receive the diverted water. ICONR must be greater than zero and
      less than or equal to NREACHES. Furthermore, reach ICONR must be a
-      downstream connection for reach RNO. This argument is an index
+      downstream connection for reach IFNO. This argument is an index
      variable, which means that it should be treated as zero-based when
      working with FloPy and Python. Flopy will automatically subtract one
      when loading index variables and add one when writing index
@@ -269,35 +279,35 @@ class ModflowGwfsfr(mfpackage.MFPackage):
      conjunction with the FLOW value specified in the
      STRESS_PERIOD_DATA section. Available diversion options include: (1)
      CPRIOR = 'FRACTION', then the amount of the diversion is computed as
-      a fraction of the streamflow leaving reach RNO (:math:`Q_{DS}`); in
+      a fraction of the streamflow leaving reach IFNO (:math:`Q_{DS}`); in
      this case, 0.0 :math:`\\le` DIVFLOW :math:`\\le` 1.0. (2) CPRIOR =
-      'EXCESS', a diversion is made only if :math:`Q_{DS}` for reach RNO
+      'EXCESS', a diversion is made only if :math:`Q_{DS}` for reach IFNO
      exceeds the value of DIVFLOW.
If this occurs, then the quantity of water
      diverted is the excess flow (:math:`Q_{DS} -` DIVFLOW) and
-      :math:`Q_{DS}` from reach RNO is set equal to DIVFLOW. This
+      :math:`Q_{DS}` from reach IFNO is set equal to DIVFLOW. This
      represents a flood-control type of diversion, as described by Danskin
      and Hanson (2002). (3) CPRIOR = 'THRESHOLD', then if :math:`Q_{DS}`
-      in reach RNO is less than the specified diversion flow DIVFLOW, no
-      water is diverted from reach RNO. If :math:`Q_{DS}` in reach RNO is
+      in reach IFNO is less than the specified diversion flow DIVFLOW, no
+      water is diverted from reach IFNO. If :math:`Q_{DS}` in reach IFNO is
      greater than or equal to DIVFLOW, DIVFLOW is diverted and
      :math:`Q_{DS}` is set to the remainder (:math:`Q_{DS} -` DIVFLOW).
      This approach assumes that once flow in the stream is sufficiently
      low, diversions from the stream cease, and is the 'priority'
      algorithm that originally was programmed into the STR1 Package
-      (Prudic, 1989). (4) CPRIOR = 'UPTO' -- if :math:`Q_{DS}` in reach RNO
-      is greater than or equal to the specified diversion flow DIVFLOW,
-      :math:`Q_{DS}` is reduced by DIVFLOW. If :math:`Q_{DS}` in reach RNO
-      is less than DIVFLOW, DIVFLOW is set to :math:`Q_{DS}` and there will
-      be no flow available for reaches connected to downstream end of reach
-      RNO.
-    perioddata : [rno, sfrsetting]
-    * rno (integer) integer value that defines the reach number associated
-      with the specified PERIOD data on the line. RNO must be greater than
-      zero and less than or equal to NREACHES. This argument is an index
-      variable, which means that it should be treated as zero-based when
-      working with FloPy and Python. Flopy will automatically subtract one
-      when loading index variables and add one when writing index
-      variables.
+      (Prudic, 1989). (4) CPRIOR = 'UPTO' -- if :math:`Q_{DS}` in reach
+      IFNO is greater than or equal to the specified diversion flow
+      DIVFLOW, :math:`Q_{DS}` is reduced by DIVFLOW. If :math:`Q_{DS}` in
+      reach IFNO is less than DIVFLOW, DIVFLOW is set to :math:`Q_{DS}` and
+      there will be no flow available for reaches connected to the
+      downstream end of reach IFNO.
+    perioddata : [ifno, sfrsetting]
+    * ifno (integer) integer value that defines the feature (reach) number
+      associated with the specified PERIOD data on the line. IFNO must be
+      greater than zero and less than or equal to NREACHES. This argument
+      is an index variable, which means that it should be treated as zero-
+      based when working with FloPy and Python. Flopy will automatically
+      subtract one when loading index variables and add one when writing
+      index variables.
    * sfrsetting (keystring) line of information that is parsed into a
      keyword and values. Keyword values that can be used to start the
      SFRSETTING string include: STATUS, MANNING, STAGE, INFLOW, RAINFALL,
@@ -381,10 +391,10 @@ class ModflowGwfsfr(mfpackage.MFPackage):
      zero. By default, runoff rates are zero for each reach.
      diversionrecord : [idv, divflow]
      * idv (integer) an integer value specifying which diversion of
-      reach RNO that DIVFLOW is being specified for. Must be less
-      or equal to ndv for the current reach (RNO). This argument is
-      an index variable, which means that it should be treated as
-      zero-based when working with FloPy and Python. Flopy will
+      reach IFNO that DIVFLOW is being specified for. Must be less
+      than or equal to ndv for the current reach (IFNO). This
+      argument is an index variable, which means that it should be
+      treated as zero-based when working with FloPy and Python.
Flopy will automatically subtract one when loading index variables and add one when writing index variables. * divflow (double) real or character value that defines the @@ -779,14 +789,14 @@ class ModflowGwfsfr(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray rno cellid rlen rwid rgrd rtp rbth rhk man ncon " + "type recarray ifno cellid rlen rwid rgrd rtp rbth rhk man ncon " "ustrf ndv aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name rno", + "name ifno", "type integer", "shape", "tagged false", @@ -919,7 +929,7 @@ class ModflowGwfsfr(mfpackage.MFPackage): [ "block crosssections", "name crosssections", - "type recarray rno tab6 filein tab6_filename", + "type recarray ifno tab6 filein tab6_filename", "shape", "valid", "optional false", @@ -927,7 +937,7 @@ class ModflowGwfsfr(mfpackage.MFPackage): ], [ "block crosssections", - "name rno", + "name ifno", "type integer", "shape", "tagged false", @@ -968,13 +978,13 @@ class ModflowGwfsfr(mfpackage.MFPackage): [ "block connectiondata", "name connectiondata", - "type recarray rno ic", + "type recarray ifno ic", "shape (maxbound)", "reader urword", ], [ "block connectiondata", - "name rno", + "name ifno", "type integer", "shape", "tagged false", @@ -986,7 +996,7 @@ class ModflowGwfsfr(mfpackage.MFPackage): "block connectiondata", "name ic", "type integer", - "shape (ncon(rno))", + "shape (ncon(ifno))", "tagged false", "in_record true", "reader urword", @@ -997,13 +1007,13 @@ class ModflowGwfsfr(mfpackage.MFPackage): [ "block diversions", "name diversions", - "type recarray rno idv iconr cprior", + "type recarray ifno idv iconr cprior", "shape (maxbound)", "reader urword", ], [ "block diversions", - "name rno", + "name ifno", "type integer", "shape", "tagged false", @@ -1055,13 +1065,13 @@ class ModflowGwfsfr(mfpackage.MFPackage): [ "block period", "name perioddata", - "type recarray rno sfrsetting", + "type recarray ifno sfrsetting", "shape", "reader urword", ], [ "block period", - "name rno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwfuzf.py b/flopy/mf6/modflow/mfgwfuzf.py index 085daefd32..dd845bdaa6 100644 --- a/flopy/mf6/modflow/mfgwfuzf.py +++ b/flopy/mf6/modflow/mfgwfuzf.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -125,13 +125,13 @@ class ModflowGwfuzf(mfpackage.MFPackage): of 40 can be used for NWAVESETS. This value can be increased if more waves are required to resolve variations in water content within the unsaturated zone. - packagedata : [iuzno, cellid, landflag, ivertcon, surfdep, vks, thtr, thts, + packagedata : [ifno, cellid, landflag, ivertcon, surfdep, vks, thtr, thts, thti, eps, boundname] - * iuzno (integer) integer value that defines the UZF cell number - associated with the specified PACKAGEDATA data on the line. IUZNO - must be greater than zero and less than or equal to NUZFCELLS. UZF - information must be specified for every UZF cell or the program will - terminate with an error. The program will also terminate with an + * ifno (integer) integer value that defines the feature (UZF object) + number associated with the specified PACKAGEDATA data on the line. + IFNO must be greater than zero and less than or equal to NUZFCELLS. 
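The UZF changes in this hunk mirror the SFR renaming: the first field in both PACKAGEDATA and PERIOD records is now `ifno`. A hypothetical FloPy-style period record, zero-based `ifno`, in the order given above (ifno, finf, pet, extdp, extwc, ha, hroot, rootact; values illustrative):

```python
uzf_perioddata = {
    0: [  # stress period 0 (zero-based)
        (0, 1.0e-8, 5.0e-10, 2.0, 0.05, 0.0, 0.0, 0.0),
    ],
}
```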
+ UZF information must be specified for every UZF cell or the program + will terminate with an error. The program will also terminate with an error if information for a UZF cell is specified more than once. This argument is an index variable, which means that it should be treated as zero-based when working with FloPy and Python. Flopy will @@ -185,13 +185,13 @@ class ModflowGwfuzf(mfpackage.MFPackage): character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. - perioddata : [iuzno, finf, pet, extdp, extwc, ha, hroot, rootact, aux] - * iuzno (integer) integer value that defines the UZF cell number - associated with the specified PERIOD data on the line. This argument - is an index variable, which means that it should be treated as zero- - based when working with FloPy and Python. Flopy will automatically - subtract one when loading index variables and add one when writing - index variables. + perioddata : [ifno, finf, pet, extdp, extwc, ha, hroot, rootact, aux] + * ifno (integer) integer value that defines the feature (UZF object) + number associated with the specified PERIOD data on the line. This + argument is an index variable, which means that it should be treated + as zero-based when working with FloPy and Python. Flopy will + automatically subtract one when loading index variables and add one + when writing index variables. * finf (string) real or character value that defines the applied infiltration rate of the UZF cell (:math:`LT^{-1}`). If the Options block includes a TIMESERIESFILE entry (see the "Time-Variable Input" @@ -632,14 +632,14 @@ class ModflowGwfuzf(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray iuzno cellid landflag ivertcon surfdep vks thtr " + "type recarray ifno cellid landflag ivertcon surfdep vks thtr " "thts thti eps boundname", "shape (nuzfcells)", "reader urword", ], [ "block packagedata", - "name iuzno", + "name ifno", "type integer", "shape", "tagged false", @@ -754,13 +754,13 @@ class ModflowGwfuzf(mfpackage.MFPackage): [ "block period", "name perioddata", - "type recarray iuzno finf pet extdp extwc ha hroot rootact aux", + "type recarray ifno finf pet extdp extwc ha hroot rootact aux", "shape", "reader urword", ], [ "block period", - "name iuzno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwtlkt.py b/flopy/mf6/modflow/mfgwtlkt.py index 80c3abec15..b3a4c92bd8 100644 --- a/flopy/mf6/modflow/mfgwtlkt.py +++ b/flopy/mf6/modflow/mfgwtlkt.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -88,10 +88,10 @@ class ModflowGwtlkt(mfpackage.MFPackage): containing data for the obs package with variable names as keys and package data as values. Data just for the observations variable is also acceptable. See obs package documentation for more information. - packagedata : [lakeno, strt, aux, boundname] - * lakeno (integer) integer value that defines the lake number - associated with the specified PACKAGEDATA data on the line. LAKENO - must be greater than zero and less than or equal to NLAKES. 
Lake + packagedata : [ifno, strt, aux, boundname] + * ifno (integer) integer value that defines the feature (lake) number + associated with the specified PACKAGEDATA data on the line. IFNO must + be greater than zero and less than or equal to NLAKES. Lake information must be specified for every lake or the program will terminate with an error. The program will also terminate with an error if information for a lake is specified more than once. This @@ -113,9 +113,9 @@ class ModflowGwtlkt(mfpackage.MFPackage): character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. - lakeperioddata : [lakeno, laksetting] - * lakeno (integer) integer value that defines the lake number - associated with the specified PERIOD data on the line. LAKENO must be + lakeperioddata : [ifno, laksetting] + * ifno (integer) integer value that defines the feature (lake) number + associated with the specified PERIOD data on the line. IFNO must be greater than zero and less than or equal to NLAKES. This argument is an index variable, which means that it should be treated as zero- based when working with FloPy and Python. Flopy will automatically @@ -469,13 +469,13 @@ class ModflowGwtlkt(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray lakeno strt aux boundname", + "type recarray ifno strt aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name lakeno", + "name ifno", "type integer", "shape", "tagged false", @@ -528,13 +528,13 @@ class ModflowGwtlkt(mfpackage.MFPackage): [ "block period", "name lakeperioddata", - "type recarray lakeno laksetting", + "type recarray ifno laksetting", "shape", "reader urword", ], [ "block period", - "name lakeno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwtmwt.py b/flopy/mf6/modflow/mfgwtmwt.py index 45cbfb2b6a..2d5929eb61 100644 --- a/flopy/mf6/modflow/mfgwtmwt.py +++ b/flopy/mf6/modflow/mfgwtmwt.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -88,10 +88,10 @@ class ModflowGwtmwt(mfpackage.MFPackage): containing data for the obs package with variable names as keys and package data as values. Data just for the observations variable is also acceptable. See obs package documentation for more information. - packagedata : [mawno, strt, aux, boundname] - * mawno (integer) integer value that defines the well number associated - with the specified PACKAGEDATA data on the line. MAWNO must be - greater than zero and less than or equal to NMAWWELLS. Well + packagedata : [ifno, strt, aux, boundname] + * ifno (integer) integer value that defines the feature (well) number + associated with the specified PACKAGEDATA data on the line. IFNO must + be greater than zero and less than or equal to NMAWWELLS. Well information must be specified for every well or the program will terminate with an error. The program will also terminate with an error if information for a well is specified more than once. This @@ -113,14 +113,14 @@ class ModflowGwtmwt(mfpackage.MFPackage): character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. 
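The transport packages follow the same pattern; for example, MWT PACKAGEDATA records are now keyed by `ifno`. A hypothetical record list (zero-based `ifno`, starting concentration, optional boundname):

```python
mwt_packagedata = [
    (0, 0.0, "well-1"),  # ifno, strt, boundname
]
```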
- mwtperioddata : [mawno, mwtsetting] - * mawno (integer) integer value that defines the well number associated - with the specified PERIOD data on the line. MAWNO must be greater - than zero and less than or equal to NMAWWELLS. This argument is an - index variable, which means that it should be treated as zero-based - when working with FloPy and Python. Flopy will automatically subtract - one when loading index variables and add one when writing index - variables. + mwtperioddata : [ifno, mwtsetting] + * ifno (integer) integer value that defines the feature (well) number + associated with the specified PERIOD data on the line. IFNO must be + greater than zero and less than or equal to NMAWWELLS. This argument + is an index variable, which means that it should be treated as zero- + based when working with FloPy and Python. Flopy will automatically + subtract one when loading index variables and add one when writing + index variables. * mwtsetting (keystring) line of information that is parsed into a keyword and values. Keyword values that can be used to start the MWTSETTING string include: STATUS, CONCENTRATION, RAINFALL, @@ -444,13 +444,13 @@ class ModflowGwtmwt(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray mawno strt aux boundname", + "type recarray ifno strt aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name mawno", + "name ifno", "type integer", "shape", "tagged false", @@ -503,13 +503,13 @@ class ModflowGwtmwt(mfpackage.MFPackage): [ "block period", "name mwtperioddata", - "type recarray mawno mwtsetting", + "type recarray ifno mwtsetting", "shape", "reader urword", ], [ "block period", - "name mawno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwtsft.py b/flopy/mf6/modflow/mfgwtsft.py index 9b8f50b005..00814b1757 100644 --- a/flopy/mf6/modflow/mfgwtsft.py +++ b/flopy/mf6/modflow/mfgwtsft.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -88,17 +88,17 @@ class ModflowGwtsft(mfpackage.MFPackage): containing data for the obs package with variable names as keys and package data as values. Data just for the observations variable is also acceptable. See obs package documentation for more information. - packagedata : [rno, strt, aux, boundname] - * rno (integer) integer value that defines the reach number associated - with the specified PACKAGEDATA data on the line. RNO must be greater - than zero and less than or equal to NREACHES. Reach information must - be specified for every reach or the program will terminate with an - error. The program will also terminate with an error if information - for a reach is specified more than once. This argument is an index - variable, which means that it should be treated as zero-based when - working with FloPy and Python. Flopy will automatically subtract one - when loading index variables and add one when writing index - variables. + packagedata : [ifno, strt, aux, boundname] + * ifno (integer) integer value that defines the feature (reach) number + associated with the specified PACKAGEDATA data on the line. IFNO must + be greater than zero and less than or equal to NREACHES. Reach + information must be specified for every reach or the program will + terminate with an error. 
The program will also terminate with an + error if information for a reach is specified more than once. This + argument is an index variable, which means that it should be treated + as zero-based when working with FloPy and Python. Flopy will + automatically subtract one when loading index variables and add one + when writing index variables. * strt (double) real value that defines the starting concentration for the reach. * aux (double) represents the values of the auxiliary variables for @@ -113,14 +113,14 @@ class ModflowGwtsft(mfpackage.MFPackage): character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. - reachperioddata : [rno, reachsetting] - * rno (integer) integer value that defines the reach number associated - with the specified PERIOD data on the line. RNO must be greater than - zero and less than or equal to NREACHES. This argument is an index - variable, which means that it should be treated as zero-based when - working with FloPy and Python. Flopy will automatically subtract one - when loading index variables and add one when writing index - variables. + reachperioddata : [ifno, reachsetting] + * ifno (integer) integer value that defines the feature (reach) number + associated with the specified PERIOD data on the line. IFNO must be + greater than zero and less than or equal to NREACHES. This argument + is an index variable, which means that it should be treated as zero- + based when working with FloPy and Python. Flopy will automatically + subtract one when loading index variables and add one when writing + index variables. * reachsetting (keystring) line of information that is parsed into a keyword and values. Keyword values that can be used to start the REACHSETTING string include: STATUS, CONCENTRATION, RAINFALL, @@ -467,13 +467,13 @@ class ModflowGwtsft(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray rno strt aux boundname", + "type recarray ifno strt aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name rno", + "name ifno", "type integer", "shape", "tagged false", @@ -526,13 +526,13 @@ class ModflowGwtsft(mfpackage.MFPackage): [ "block period", "name reachperioddata", - "type recarray rno reachsetting", + "type recarray ifno reachsetting", "shape", "reader urword", ], [ "block period", - "name rno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfgwtuzt.py b/flopy/mf6/modflow/mfgwtuzt.py index 4ef9ff3255..f627c50f57 100644 --- a/flopy/mf6/modflow/mfgwtuzt.py +++ b/flopy/mf6/modflow/mfgwtuzt.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on September 26, 2023 15:51:55 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -88,10 +88,10 @@ class ModflowGwtuzt(mfpackage.MFPackage): containing data for the obs package with variable names as keys and package data as values. Data just for the observations variable is also acceptable. See obs package documentation for more information. - packagedata : [uzfno, strt, aux, boundname] - * uzfno (integer) integer value that defines the UZF cell number - associated with the specified PACKAGEDATA data on the line. UZFNO - must be greater than zero and less than or equal to NUZFCELLS. 
+ packagedata : [ifno, strt, aux, boundname] + * ifno (integer) integer value that defines the feature (UZF object) + number associated with the specified PACKAGEDATA data on the line. + IFNO must be greater than zero and less than or equal to NUZFCELLS. Unsaturated zone flow information must be specified for every UZF cell or the program will terminate with an error. The program will also terminate with an error if information for a UZF cell is @@ -113,14 +113,14 @@ class ModflowGwtuzt(mfpackage.MFPackage): is an ASCII character variable that can contain as many as 40 characters. If BOUNDNAME contains spaces in it, then the entire name must be enclosed within single quotes. - uztperioddata : [uzfno, uztsetting] - * uzfno (integer) integer value that defines the UZF cell number - associated with the specified PERIOD data on the line. UZFNO must be - greater than zero and less than or equal to NUZFCELLS. This argument - is an index variable, which means that it should be treated as zero- - based when working with FloPy and Python. Flopy will automatically - subtract one when loading index variables and add one when writing - index variables. + uztperioddata : [ifno, uztsetting] + * ifno (integer) integer value that defines the feature (UZF object) + number associated with the specified PERIOD data on the line. IFNO + must be greater than zero and less than or equal to NUZFCELLS. This + argument is an index variable, which means that it should be treated + as zero-based when working with FloPy and Python. Flopy will + automatically subtract one when loading index variables and add one + when writing index variables. * uztsetting (keystring) line of information that is parsed into a keyword and values. Keyword values that can be used to start the UZTSETTING string include: STATUS, CONCENTRATION, INFILTRATION, UZET, @@ -453,13 +453,13 @@ class ModflowGwtuzt(mfpackage.MFPackage): [ "block packagedata", "name packagedata", - "type recarray uzfno strt aux boundname", + "type recarray ifno strt aux boundname", "shape (maxbound)", "reader urword", ], [ "block packagedata", - "name uzfno", + "name ifno", "type integer", "shape", "tagged false", @@ -512,13 +512,13 @@ class ModflowGwtuzt(mfpackage.MFPackage): [ "block period", "name uztperioddata", - "type recarray uzfno uztsetting", + "type recarray ifno uztsetting", "shape", "reader urword", ], [ "block period", - "name uzfno", + "name ifno", "type integer", "shape", "tagged false", diff --git a/flopy/mf6/modflow/mfsimulation.py b/flopy/mf6/modflow/mfsimulation.py index cde15d3a4e..28fb749272 100644 --- a/flopy/mf6/modflow/mfsimulation.py +++ b/flopy/mf6/modflow/mfsimulation.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 22, 2023 21:13:41 UTC +# FILE created on September 26, 2023 15:51:55 UTC import os from typing import Union @@ -34,6 +34,12 @@ class MFSimulation(mfsimbase.MFSimulationBase): maxerrors : integer * maxerrors (integer) maximum number of errors that will be stored and printed. + print_input : boolean + * print_input (boolean) keyword to activate printing of simulation + input summaries to the simulation list file (mfsim.lst). With this + keyword, input summaries will be written for those packages that + support newer input data model routines. Not all packages are + supported yet by the newer input data model routines. tdis6 : string * tdis6 (string) is the name of the Temporal Discretization (TDIS) Input File. 
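A minimal sketch of the new simulation-level PRINT_INPUT option added here, assuming a FloPy build that includes this patch:

```python
import flopy

sim = flopy.mf6.MFSimulation(
    sim_name="demo",
    sim_ws=".",
    print_input=True,  # write input summaries to mfsim.lst
)
```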
@@ -85,6 +91,7 @@ def __init__( nocheck=None, memory_print_option=None, maxerrors=None, + print_input=None, ): super().__init__( sim_name=sim_name, @@ -100,11 +107,13 @@ def __init__( self.name_file.nocheck.set_data(nocheck) self.name_file.memory_print_option.set_data(memory_print_option) self.name_file.maxerrors.set_data(maxerrors) + self.name_file.print_input.set_data(print_input) self.continue_ = self.name_file.continue_ self.nocheck = self.name_file.nocheck self.memory_print_option = self.name_file.memory_print_option self.maxerrors = self.name_file.maxerrors + self.print_input = self.name_file.print_input @classmethod def load( From 637632cfa3ddbd2629dfc257759d36b851ee0063 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Tue, 26 Sep 2023 15:36:59 -0400 Subject: [PATCH 064/101] fix(mbase): warn if duplicate pkgs or units (#1964) * consistent with MFSimulation verbosity defaults * use category=UserWarning for mbase.py warnings * closes #1817 --- autotest/test_mp7.py | 42 +++++++++++++++++++----------------------- flopy/mbase.py | 26 ++++++++++++++------------ 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/autotest/test_mp7.py b/autotest/test_mp7.py index 540b6b9c25..69622b777d 100644 --- a/autotest/test_mp7.py +++ b/autotest/test_mp7.py @@ -896,8 +896,7 @@ def test_pathline_plotting(function_tmpdir): @requires_exe("mf6", "mp7") -@pytest.mark.parametrize("verbose", [True, False]) -def test_mp7sim_replacement(function_tmpdir, capfd, verbose): +def test_mp7sim_replacement(function_tmpdir, capfd): mf6sim = Mp7Cases.mf6(function_tmpdir) mf6sim.write_simulation() mf6sim.run_simulation() @@ -908,7 +907,6 @@ def test_mp7sim_replacement(function_tmpdir, capfd, verbose): flowmodel=mf6sim.get_model(mf6sim.name), exe_name="mp7", model_ws=mf6sim.sim_path, - verbose=verbose, ) defaultiface6 = {"RCH": 6, "EVT": 6} mpbas = Modpath7Bas(mp, porosity=0.1, defaultiface=defaultiface6) @@ -928,27 +926,25 @@ def test_mp7sim_replacement(function_tmpdir, capfd, verbose): zones=Mp7Cases.zones, particlegroups=Mp7Cases.particlegroups, ) - # add a duplicate mp7sim package - mpsim = Modpath7Sim( - mp, - simulationtype="combined", - trackingdirection="forward", - weaksinkoption="pass_through", - weaksourceoption="pass_through", - budgetoutputoption="summary", - budgetcellnumbers=[1049, 1259], - traceparticledata=[1, 1000], - referencetime=[0, 0, 0.0], - stoptimeoption="extend", - timepointdata=[500, 1000.0], - zonedataoption="on", - zones=Mp7Cases.zones, - particlegroups=Mp7Cases.particlegroups, - ) - cap = capfd.readouterr() - msg = "Two packages of the same type" - assert verbose == (msg in cap.out) + # add a duplicate mp7sim package + with pytest.warns(UserWarning, match="Two packages of the same type"): + mpsim = Modpath7Sim( + mp, + simulationtype="combined", + trackingdirection="forward", + weaksinkoption="pass_through", + weaksourceoption="pass_through", + budgetoutputoption="summary", + budgetcellnumbers=[1049, 1259], + traceparticledata=[1, 1000], + referencetime=[0, 0, 0.0], + stoptimeoption="extend", + timepointdata=[500, 1000.0], + zonedataoption="on", + zones=Mp7Cases.zones, + particlegroups=Mp7Cases.particlegroups, + ) mp.write_input() success, buff = mp.run_model() diff --git a/flopy/mbase.py b/flopy/mbase.py index af711c5a2f..086b026d0c 100644 --- a/flopy/mbase.py +++ b/flopy/mbase.py @@ -94,7 +94,8 @@ def _resolve(exe_name): if exe_path is None: if forgive: warn( - f"The program {exe_name} does not exist or is not executable." 
+ f"The program {exe_name} does not exist or is not executable.", + category=UserWarning, ) return None @@ -416,7 +417,8 @@ def __init__( except: warn( f"\n{model_ws} not valid, " - f"workspace-folder was changed to {os.getcwd()}\n" + f"workspace-folder was changed to {os.getcwd()}\n", + category=UserWarning, ) model_ws = os.getcwd() self._model_ws = str(model_ws) @@ -436,7 +438,7 @@ def __init__( self._start_datetime = kwargs.pop("start_datetime", "1-1-1970") if kwargs: - warnings.warn( + warn( f"unhandled keywords: {kwargs}", category=UserWarning, ) @@ -653,20 +655,20 @@ def add_package(self, p): pn = p.name[idx] except: pn = p.name - if self.verbose: - print( - f"\nWARNING:\n unit {u} of package {pn} already in use." - ) + warn( + f"Unit {u} of package {pn} already in use.", + category=UserWarning, + ) self.package_units.append(u) for i, pp in enumerate(self.packagelist): if pp.allowDuplicates: continue elif isinstance(p, type(pp)): - if self.verbose: - print( - "\nWARNING:\n Two packages of the same type, " - f"Replacing existing '{p.name[0]}' package." - ) + warn( + "Two packages of the same type, " + f"Replacing existing '{p.name[0]}' package.", + category=UserWarning, + ) self.packagelist[i] = p return if self.verbose: From 941a5f105b10f50fc71b14ea05abf499c4f65fa5 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Tue, 26 Sep 2023 21:23:23 -0400 Subject: [PATCH 065/101] refactor(recarray_utils): deprecate functions, use numpy builtins (#1960) --- .../Notebooks/modpath7_structured_example.py | 3 +- autotest/test_mp6.py | 7 ++-- flopy/utils/__init__.py | 2 +- flopy/utils/modpathfile.py | 7 ++-- flopy/utils/recarray_utils.py | 39 ++++++++++++------- 5 files changed, 33 insertions(+), 25 deletions(-) diff --git a/.docs/Notebooks/modpath7_structured_example.py b/.docs/Notebooks/modpath7_structured_example.py index ebf561190f..9f76f87e30 100644 --- a/.docs/Notebooks/modpath7_structured_example.py +++ b/.docs/Notebooks/modpath7_structured_example.py @@ -30,6 +30,7 @@ import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np +from numpy.lib.recfunctions import repack_fields # run installed version of flopy or add local path try: @@ -211,7 +212,7 @@ # Get locations to extract pathline data nodew = m.dis.get_node([wel_loc]) -riv_locs = flopy.utils.ra_slice(m.riv.stress_period_data[0], ["k", "i", "j"]) +riv_locs = repack_fields(m.riv.stress_period_data[0][["k", "i", "j"]]) nodesr = m.dis.get_node(riv_locs.tolist()) # Pathline data diff --git a/autotest/test_mp6.py b/autotest/test_mp6.py index 6f37de0369..545ffe70d1 100644 --- a/autotest/test_mp6.py +++ b/autotest/test_mp6.py @@ -3,11 +3,11 @@ import matplotlib.pyplot as plt import numpy as np -import pandas as pd import pytest from autotest.conftest import get_example_data_path from autotest.test_mp6_cases import Mp6Cases1, Mp6Cases2 -from modflow_devtools.markers import has_pkg, requires_exe, requires_pkg +from modflow_devtools.markers import requires_exe, requires_pkg +from numpy.lib.recfunctions import repack_fields from pytest_cases import parametrize_with_cases import flopy @@ -19,7 +19,6 @@ from flopy.plot import PlotMapView from flopy.utils import EndpointFile, PathlineFile, TimeseriesFile from flopy.utils.flopy_io import loadtxt -from flopy.utils.recarray_utils import ra_slice pytestmark = pytest.mark.mf6 @@ -171,7 +170,7 @@ def test_get_destination_data(function_tmpdir, mp6_test_path): # check that all starting locations are included in the pathline data # (pathline data slice not just endpoints) - starting_locs = 
ra_slice(well_epd, ["k0", "i0", "j0"])
+    starting_locs = repack_fields(well_epd[["k0", "i0", "j0"]])
     pathline_locs = np.array(
         np.array(well_pthld)[["k", "i", "j"]].tolist(),
         dtype=starting_locs.dtype,
diff --git a/flopy/utils/__init__.py b/flopy/utils/__init__.py
index ea5e9d933b..76ba6d6930 100644
--- a/flopy/utils/__init__.py
+++ b/flopy/utils/__init__.py
@@ -49,7 +49,7 @@
 from .optionblock import OptionBlock
 from .postprocessing import get_specific_discharge, get_transmissivities
 from .rasters import Raster
-from .recarray_utils import create_empty_recarray, ra_slice
+from .recarray_utils import create_empty_recarray, ra_slice, recarray
 from .reference import TemporalReference
 from .sfroutputfile import SfrFile
 from .swroutputfile import (
diff --git a/flopy/utils/modpathfile.py b/flopy/utils/modpathfile.py
index 7931abaf9c..76b8c6ed18 100644
--- a/flopy/utils/modpathfile.py
+++ b/flopy/utils/modpathfile.py
@@ -13,10 +13,9 @@
 from typing import Union

 import numpy as np
-from numpy.lib.recfunctions import append_fields, stack_arrays
+from numpy.lib.recfunctions import append_fields, repack_fields, stack_arrays

 from ..utils.flopy_io import loadtxt
-from ..utils.recarray_utils import ra_slice


 class _ModpathSeries:
@@ -1161,7 +1160,7 @@
         else:
             keys = ["k", "i", "j"]
         try:
-            raslice = ra_slice(ra, keys)
+            raslice = repack_fields(ra[keys])
         except (KeyError, ValueError):
             raise KeyError(
                 "could not extract "
@@ -1174,7 +1173,7 @@
         else:
             keys = ["node"]
         try:
-            raslice = ra_slice(ra, keys)
+            raslice = repack_fields(ra[keys])
         except (KeyError, ValueError):
             msg = f"could not extract '{keys[0]}' key from endpoint data"
             raise KeyError(msg)
diff --git a/flopy/utils/recarray_utils.py b/flopy/utils/recarray_utils.py
index aa4b6464f3..0712eda8fc 100644
--- a/flopy/utils/recarray_utils.py
+++ b/flopy/utils/recarray_utils.py
@@ -1,4 +1,5 @@
 import numpy as np
+from numpy.lib.recfunctions import repack_fields


 def create_empty_recarray(length, dtype, default_value=0):
@@ -22,10 +23,11 @@ def create_empty_recarray(length, dtype, default_value=0):
     Examples
     --------
     >>> import numpy as np
-    >>> import flopy
-    >>> dtype = np.dtype([('x', np.float32), ('y', np.float32)])
-    >>> ra = flopy.utils.create_empty_recarray(10, dtype)
-
+    >>> from flopy.utils import create_empty_recarray
+    >>> dt = np.dtype([('x', np.float32), ('y', np.float32)])
+    >>> create_empty_recarray(1, dt)
+    rec.array([(0., 0.)],
+              dtype=[('x', '<f4'), ('y', '<f4')])
-    >>> import flopy
-    >>> raslice = flopy.utils.ra_slice(ra, ['x', 'y'])
-
-
+    >>> import numpy as np
+    >>> from flopy.utils import ra_slice
+    >>> a = np.core.records.fromrecords([("a", 1, 1.1), ("b", 2, 2.1)])
+    >>> ra_slice(a, ['f0', 'f1'])
+    rec.array([('a', 1), ('b', 2)],
+              dtype=[('f0', '<U1'), ('f1', '<i8')])
     >>> import numpy as np
     >>> import flopy
-    >>> dtype = np.dtype([('x', np.float32), ('y', np.float32)])
-    >>> arr = [(1., 2.), (10., 20.), (100., 200.)]
-    >>> ra = flopy.utils.recarray(arr, dtype)
-
+    >>> dt = np.dtype([('x', np.float32), ('y', np.float32)])
+    >>> a = [(1., 2.), (10., 20.), (100., 200.)]
+    >>> flopy.utils.recarray(a, dt)
+    rec.array([( 1., 2.), ( 10., 20.), (100., 200.)],
+              dtype=[('x', '<f4'), ('y', '<f4')])
     """

From 0d9faa34974c1e9cf32b2c57e231735e6246e4b2 Mon Sep 17 00:00:00 2001
From: wpbonelli 
Date: Thu, 28 Sep 2023 13:04:07 -0400
Subject: [PATCH 066/101] ci: add color=yes to addopts in pytest.ini (#1967)

* configure once instead of on every invocation

---
 .github/workflows/benchmark.yml | 4 ++--
 .github/workflows/commit.yml | 6 +++---
 .github/workflows/examples.yml | 4 ++--
.github/workflows/mf6.yml | 4 ++-- .github/workflows/optional.yml | 2 +- .github/workflows/regression.yml | 4 ++-- .github/workflows/release.yml | 2 +- .github/workflows/rtd.yml | 2 +- autotest/pytest.ini | 2 +- 9 files changed, 15 insertions(+), 15 deletions(-) diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index eba412d900..e72415c424 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -69,7 +69,7 @@ jobs: working-directory: ./autotest run: | mkdir -p .benchmarks - pytest -v --color=yes --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed + pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -79,7 +79,7 @@ jobs: working-directory: ./autotest run: | mkdir -p .benchmarks - pytest -v --color=yes --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed + pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index dad0c098cf..5e5163f5ce 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -131,7 +131,7 @@ jobs: - name: Smoke test working-directory: autotest - run: pytest -v -n=auto --smoke --color=yes --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -226,7 +226,7 @@ jobs: if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --color=yes --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -236,7 +236,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="not example and not regression" -n=auto --color=yes --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile + pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index e0b4fb061c..fe9b014b3b 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -97,7 +97,7 @@ jobs: if: runner.os != 'Windows' working-directory: ./autotest run: | - pytest -v -m="example" -n=auto -s --color=yes --durations=0 --keep-failed=.failed + pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -106,7 +106,7 @@ jobs: shell: bash -l {0} working-directory: ./autotest run: | - pytest -v -m="example" -n=auto -s --color=yes --durations=0 --keep-failed=.failed + pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git 
a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index 445eebf180..e1709ba687 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -82,12 +82,12 @@ jobs: env: GITHUB_TOKEN: ${{ github.token }} run: | - pytest -v --color=yes --durations=0 get_exes.py + pytest -v --durations=0 get_exes.py - name: Run tests working-directory: modflow6/autotest run: | - pytest -v --color=yes --cov=flopy --cov-report=xml --durations=0 -k "test_gw" -n auto + pytest -v --cov=flopy --cov-report=xml --durations=0 -k "test_gw" -n auto - name: Print coverage report before upload working-directory: ./modflow6/autotest diff --git a/.github/workflows/optional.yml b/.github/workflows/optional.yml index c3081a7ab1..ee29136542 100644 --- a/.github/workflows/optional.yml +++ b/.github/workflows/optional.yml @@ -62,7 +62,7 @@ jobs: - name: Smoke test (${{ matrix.optdeps }}) working-directory: autotest - run: pytest -v -n=auto --smoke --color=yes --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed + run: pytest -v -n=auto --smoke --cov=flopy --cov-report=xml --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml index 097ff9efa0..972f1e1650 100644 --- a/.github/workflows/regression.yml +++ b/.github/workflows/regression.yml @@ -80,7 +80,7 @@ jobs: - name: Run regression tests if: runner.os != 'Windows' working-directory: ./autotest - run: pytest -v -m="regression" -n=auto --color=yes --durations=0 --keep-failed=.failed + run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -88,7 +88,7 @@ jobs: if: runner.os == 'Windows' shell: bash -l {0} working-directory: ./autotest - run: pytest -v -m="regression" -n=auto --color=yes --durations=0 --keep-failed=.failed + run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index bcd0af801d..e130f3a337 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -71,7 +71,7 @@ jobs: - name: Run tests working-directory: autotest - run: pytest -v -m="not example and not regression" -n=auto --color=yes --durations=0 --keep-failed=.failed --dist loadfile + run: pytest -v -m="not example and not regression" -n=auto --durations=0 --keep-failed=.failed --dist loadfile env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/rtd.yml b/.github/workflows/rtd.yml index 76d4f1b43b..b092e4b16e 100644 --- a/.github/workflows/rtd.yml +++ b/.github/workflows/rtd.yml @@ -79,7 +79,7 @@ jobs: - name: Run tutorial and example notebooks working-directory: autotest - run: pytest -v -n auto --color=yes test_notebooks.py + run: pytest -v -n auto test_notebooks.py - name: Upload notebooks artifact for ReadtheDocs if: github.repository_owner == 'modflowpy' && github.event_name == 'push' && runner.os == 'Linux' diff --git a/autotest/pytest.ini b/autotest/pytest.ini index 97931bb22a..8f4616fd37 100644 --- a/autotest/pytest.ini +++ b/autotest/pytest.ini @@ -1,5 +1,5 @@ [pytest] -addopts = -ra +addopts = -ra --color=yes python_files = test_*.py profile_*.py From 158d2d69fdfdd2cefbcdaa1a403f9103f51d2d70 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 29 Sep 2023 14:25:13 -0400 Subject: [PATCH 067/101] fix(get_structured_faceflows): cover edge cases, expand tests (#1968) * reproduce and fix provided 2D model * stub 
simpler parametrized 2D tests * add freyberg mf6/mf2005 comparison * move get_face into get_structured_faceflows * remove og repro (covered by parametrized test) --- autotest/test_postprocessing.py | 188 ++++++++++++++++++++++++++---- flopy/mf6/utils/postprocessing.py | 109 +++++++++++++---- 2 files changed, 251 insertions(+), 46 deletions(-) diff --git a/autotest/test_postprocessing.py b/autotest/test_postprocessing.py index 4812ac9c02..fd49f03834 100644 --- a/autotest/test_postprocessing.py +++ b/autotest/test_postprocessing.py @@ -4,6 +4,7 @@ import pytest from modflow_devtools.markers import requires_exe +import flopy from flopy.mf6 import ( MFSimulation, ModflowGwf, @@ -26,51 +27,190 @@ ) -@pytest.fixture(scope="module") +@pytest.fixture +def mf2005_freyberg_path(example_data_path): + return example_data_path / "freyberg" + + +@pytest.fixture def mf6_freyberg_path(example_data_path): return example_data_path / "mf6-freyberg" +@pytest.mark.parametrize( + "nlay, nrow, ncol", + [ + # extended in 1 dimension + [3, 1, 1], + [1, 3, 1], + [1, 1, 3], + # 2D + [3, 3, 1], + [1, 3, 3], + [3, 1, 3], + # 3D + [3, 3, 3], + ], +) @pytest.mark.mf6 @requires_exe("mf6") -def test_faceflows(function_tmpdir, mf6_freyberg_path): +def test_get_structured_faceflows(function_tmpdir, nlay, nrow, ncol): + name = "gsff" + sim = flopy.mf6.MFSimulation( + sim_name=name, exe_name="mf6", version="mf6", sim_ws=function_tmpdir + ) + + # tdis + tdis = flopy.mf6.ModflowTdis( + sim, + nper=1, + perioddata=[(1.0, 1, 1.0)], + ) + + # gwf + gwf = flopy.mf6.ModflowGwf( + sim, + modelname=name, + model_nam_file="{}.nam".format(name), + save_flows=True, + ) + + # dis + botm = ( + np.ones((nlay, nrow, ncol)) + * np.arange(nlay - 1, -1, -1)[:, np.newaxis, np.newaxis] + ) + dis = flopy.mf6.ModflowGwfdis( + gwf, + nlay=nlay, + nrow=nrow, + ncol=ncol, + top=nlay, + botm=botm, + ) + + # initial conditions + h0 = nlay * 2 + start = h0 * np.ones((nlay, nrow, ncol)) + ic = flopy.mf6.ModflowGwfic(gwf, pname="ic", strt=start) + + # constant head + chd_rec = [] + max_dim = max(nlay, nrow, ncol) + h = np.linspace(11, 13, max_dim) + iface = 6 # top + for i in range(0, max_dim): + # ((layer,row,col),head,iface) + cell_id = ( + (0, 0, i) if ncol > 1 else (0, i, 0) if nrow > 1 else (i, 0, 0) + ) + chd_rec.append((cell_id, h[i], iface)) + chd = flopy.mf6.ModflowGwfchd( + gwf, + auxiliary=[("iface",)], + stress_period_data=chd_rec, + print_input=True, + print_flows=True, + save_flows=True, + ) + + # node property flow + npf = flopy.mf6.ModflowGwfnpf(gwf, save_specific_discharge=True) + + # output control + budgetfile = "{}.cbb".format(name) + budget_filerecord = [budgetfile] + saverecord = [("BUDGET", "ALL")] + oc = flopy.mf6.ModflowGwfoc( + gwf, + saverecord=saverecord, + budget_filerecord=budget_filerecord, + ) + + # solver + ims = flopy.mf6.ModflowIms(sim) + + # write and run the model + sim.write_simulation() + sim.check() + success, buff = sim.run_simulation() + assert success + + # load budget output + budget = gwf.output.budget() + flow_ja_face = budget.get_data(text="FLOW-JA-FACE")[0] + frf, fff, flf = get_structured_faceflows( + flow_ja_face, + grb_file=function_tmpdir / f"{gwf.name}.dis.grb", + verbose=True, + ) + + # expect nonzero flows only in extended (>1 cell) dimensions + assert np.any(frf) == (ncol > 1) + assert np.any(fff) == (nrow > 1) + assert np.any(flf) == (nlay > 1) + + +@pytest.mark.mf6 +@requires_exe("mf6") +def test_get_structured_faceflows_freyberg( + function_tmpdir, mf2005_freyberg_path, mf6_freyberg_path +): + # 
create workspaces + mf6_ws = function_tmpdir / "mf6" + mf2005_ws = function_tmpdir / "mf2005" + + # run freyberg mf6 sim = MFSimulation.load( sim_name="freyberg", exe_name="mf6", sim_ws=mf6_freyberg_path, ) - - # change the simulation workspace - sim.set_sim_path(function_tmpdir) - - # write the model simulation files + sim.set_sim_path(mf6_ws) sim.write_simulation() - - # run the simulation sim.run_simulation() - # get output + # get freyberg mf6 output and compute structured faceflows gwf = sim.get_model("freyberg") - head = gwf.output.head().get_data() - cbc = gwf.output.budget() + mf6_head = gwf.output.head().get_data() + mf6_cbc = gwf.output.budget() + mf6_spdis = mf6_cbc.get_data(text="DATA-SPDIS")[0] + mf6_flowja = mf6_cbc.get_data(text="FLOW-JA-FACE")[0] + mf6_frf, mf6_fff, mf6_flf = get_structured_faceflows( + mf6_flowja, + grb_file=mf6_ws / "freyberg.dis.grb", + ) + assert mf6_frf.shape == mf6_fff.shape == mf6_flf.shape == mf6_head.shape + assert not np.any(mf6_flf) # only 1 layer + + # run freyberg mf2005 + model = Modflow.load("freyberg", model_ws=mf2005_freyberg_path) + model.change_model_ws(mf2005_ws) + model.write_input() + model.run_model() + + # get freyberg mf2005 output + mf2005_cbc = flopy.utils.CellBudgetFile(mf2005_ws / "freyberg.cbc") + mf2005_frf, mf2005_fff = ( + mf2005_cbc.get_data(text="FLOW RIGHT FACE", full3D=True)[0], + mf2005_cbc.get_data(text="FLOW FRONT FACE", full3D=True)[0], + ) - spdis = cbc.get_data(text="DATA-SPDIS")[0] - flowja = cbc.get_data(text="FLOW-JA-FACE")[0] + # compare mf2005 faceflows with converted mf6 faceflows + assert mf2005_frf.shape == mf2005_fff.shape == mf6_head.shape + assert np.allclose(mf6_frf, np.flip(mf2005_frf, 0), atol=1e-3) + assert np.allclose(mf6_fff, np.flip(mf2005_fff, 0), atol=1e-3) - frf, fff, flf = get_structured_faceflows( - flowja, - grb_file=function_tmpdir / "freyberg.dis.grb", - ) Qx, Qy, Qz = get_specific_discharge( - (frf, fff, flf), + (mf6_frf, mf6_fff, mf6_flf), gwf, ) sqx, sqy, sqz = get_specific_discharge( - (frf, fff, flf), + (mf6_frf, mf6_fff, mf6_flf), gwf, - head=head, + head=mf6_head, ) - qx, qy, qz = get_specific_discharge(spdis, gwf) + qx, qy, qz = get_specific_discharge(mf6_spdis, gwf) fig = plt.figure(figsize=(12, 6), constrained_layout=True) ax = fig.add_subplot(1, 3, 1, aspect="equal") @@ -88,6 +228,7 @@ def test_faceflows(function_tmpdir, mf6_freyberg_path): q1 = mm.plot_vector(qx, qy) assert isinstance(q1, matplotlib.quiver.Quiver) + # plt.show() plt.close("all") # uv0 = np.column_stack((q0.U, q0.V)) @@ -96,7 +237,6 @@ def test_faceflows(function_tmpdir, mf6_freyberg_path): # assert ( # np.allclose(uv0, uv1) # ), "get_faceflows quivers are not equal to specific discharge vectors" - return @pytest.mark.mf6 @@ -149,7 +289,7 @@ def test_flowja_residuals(function_tmpdir, mf6_freyberg_path): @pytest.mark.mf6 @requires_exe("mf6") -def test_structured_faceflows_3d(function_tmpdir): +def test_structured_faceflows_3d_shape(function_tmpdir): name = "mymodel" sim = MFSimulation(sim_name=name, sim_ws=function_tmpdir, exe_name="mf6") tdis = ModflowTdis(sim) diff --git a/flopy/mf6/utils/postprocessing.py b/flopy/mf6/utils/postprocessing.py index df82997802..a5b00a9b47 100644 --- a/flopy/mf6/utils/postprocessing.py +++ b/flopy/mf6/utils/postprocessing.py @@ -4,7 +4,14 @@ def get_structured_faceflows( - flowja, grb_file=None, ia=None, ja=None, verbose=False + flowja, + grb_file=None, + ia=None, + ja=None, + nlay=None, + nrow=None, + ncol=None, + verbose=False, ): """ Get the face flows for the flow right face, 
flow front face, and @@ -22,6 +29,12 @@ def get_structured_faceflows( CRS row pointers. Only required if grb_file is not provided. ja : list or ndarray CRS column pointers. Only required if grb_file is not provided. + nlay : int + number of layers in the grid. Only required if grb_file is not provided. + nrow : int + number of rows in the grid. Only required if grb_file is not provided. + ncol : int + number of columns in the grid. Only required if grb_file is not provided. verbose: bool Write information to standard output @@ -43,11 +56,19 @@ def get_structured_faceflows( "is only for structured DIS grids" ) ia, ja = grb.ia, grb.ja + nlay, nrow, ncol = grb.nlay, grb.nrow, grb.ncol else: - if ia is None or ja is None: + if ( + ia is None + or ja is None + or nlay is None + or nrow is None + or ncol is None + ): raise ValueError( - "ia and ja arrays must be specified if the MODFLOW 6" - "binary grid file name is not specified." + "ia, ja, nlay, nrow, and ncol must be" + "specified if a MODFLOW 6 binary grid" + "file name is not specified." ) # flatten flowja, if necessary @@ -57,27 +78,71 @@ def get_structured_faceflows( # evaluate size of flowja relative to ja __check_flowja_size(flowja, ja) - # create face flow arrays - shape = (grb.nlay, grb.nrow, grb.ncol) - frf = np.zeros(shape, dtype=float).flatten() - fff = np.zeros(shape, dtype=float).flatten() - flf = np.zeros(shape, dtype=float).flatten() + # create empty flat face flow arrays + shape = (nlay, nrow, ncol) + frf = np.zeros(shape, dtype=float).flatten() # right + fff = np.zeros(shape, dtype=float).flatten() # front + flf = np.zeros(shape, dtype=float).flatten() # lower + + def get_face(m, n, nlay, nrow, ncol): + """ + Determine connection direction at (m, n) + in a connection or intercell flow matrix. + + Notes + ----- + For visual intuition in 2 dimensions + https://stackoverflow.com/a/16330162/6514033 + helps. MODFLOW uses the left-side scheme in 3D. 
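+
+        A sketch of the convention for intuition, assuming zero-based
+        node numbers on a hypothetical 3 x 3 x 3 grid
+        (nlay = nrow = ncol = 3):
+
+        >>> get_face(1, 0, 3, 3, 3)  # m - n == 1: right face
+        0
+        >>> get_face(3, 0, 3, 3, 3)  # m - n == ncol: front face
+        1
+        >>> get_face(9, 0, 3, 3, 3)  # m - n == nrow * ncol: lower face
+        2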
+ + Parameters + ---------- + m : int + row index + n : int + column index + nlay : int + number of layers in the grid + nrow : int + number of rows in the grid + ncol : int + number of columns in the grid + + Returns + ------- + face : int + 0: right, 1: front, 2: lower + """ + + d = m - n + if d == 1: + # handle 1D cases + if nrow == 1 and ncol == 1: + return 2 + elif nlay == 1 and ncol == 1: + return 1 + elif nlay == 1 and nrow == 1: + return 0 + else: + # handle 2D layers/rows case + return 1 if ncol == 1 else 0 + elif d == nrow * ncol: + return 2 + else: + return 1 - # fill flow terms - vmult = [-1.0, -1.0, -1.0] + # fill right, front and lower face flows + # (below main diagonal) flows = [frf, fff, flf] for n in range(grb.nodes): - i0, i1 = ia[n] + 1, ia[n + 1] - for j in range(i0, i1): - jcol = ja[j] - if jcol > n: - if jcol == n + 1: - ipos = 0 - elif jcol == n + grb.ncol: - ipos = 1 - else: - ipos = 2 - flows[ipos][n] = vmult[ipos] * flowja[j] + for i in range(ia[n] + 1, ia[n + 1]): + m = ja[i] + if m <= n: + continue + face = get_face(m, n, nlay, nrow, ncol) + flows[face][n] = -1 * flowja[i] + + # reshape and return return frf.reshape(shape), fff.reshape(shape), flf.reshape(shape) From 4b105e866e4c58380415a25d131e9d96fc82a4ba Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sat, 30 Sep 2023 08:07:06 -0400 Subject: [PATCH 068/101] fix(CellBudgetFile): detect compact fmt by negative nlay (#1966) * previously omitted totim=0 if compact * add mf6 test case courtesy of mbakker * test mf2005 with compact/non-compact --- autotest/test_binaryfile.py | 190 +++++++++++++++++++++++++++++++++--- flopy/utils/binaryfile.py | 8 +- 2 files changed, 180 insertions(+), 18 deletions(-) diff --git a/autotest/test_binaryfile.py b/autotest/test_binaryfile.py index d0d5925205..b5cfc65bc0 100644 --- a/autotest/test_binaryfile.py +++ b/autotest/test_binaryfile.py @@ -1,13 +1,14 @@ import os +from itertools import repeat import numpy as np import pytest from autotest.conftest import get_example_data_path from matplotlib import pyplot as plt from matplotlib.axes import Axes +from modflow_devtools.markers import requires_exe -from flopy.mf6.modflow import MFSimulation -from flopy.modflow import Modflow +import flopy from flopy.utils import ( BinaryHeader, CellBudgetFile, @@ -41,7 +42,9 @@ def zonbud_model_path(example_data_path): def test_binaryfile_writeread(function_tmpdir, nwt_model_path): model = "Pr3_MFNWT_lower.nam" - ml = Modflow.load(model, version="mfnwt", model_ws=nwt_model_path) + ml = flopy.modflow.Modflow.load( + model, version="mfnwt", model_ws=nwt_model_path + ) # change the model work space ml.change_model_ws(function_tmpdir) # @@ -199,9 +202,6 @@ def test_headufile_get_ts(example_data_path): heads.get_ts([i, i + 1, i + 2]) -# precision - - def test_get_headfile_precision(example_data_path): precision = get_headfile_precision( example_data_path / "freyberg" / "freyberg.githds" @@ -254,9 +254,6 @@ def test_budgetfile_detect_precision_double(path): assert file.realtype == np.float64 -# write - - def test_write_head(function_tmpdir): file_path = function_tmpdir / "headfile" head_data = np.random.random((10, 10)) @@ -294,9 +291,6 @@ def test_write_budget(function_tmpdir): assert np.array_equal(content1, content2) -# read - - def test_binaryfile_read(function_tmpdir, freyberg_model_path): h = HeadFile(freyberg_model_path / "freyberg.githds") assert isinstance(h, HeadFile) @@ -347,13 +341,10 @@ def test_binaryfile_read_context(freyberg_model_path): assert str(e.value) == "seek of closed file", 
str(e.value) -# reverse - - def test_headfile_reverse_mf6(example_data_path, function_tmpdir): # load simulation and extract tdis sim_name = "test006_gwf3" - sim = MFSimulation.load( + sim = flopy.mf6.MFSimulation.load( sim_name=sim_name, sim_ws=example_data_path / "mf6" / sim_name ) tdis = sim.get_package("tdis") @@ -403,3 +394,170 @@ def test_headfile_reverse_mf6(example_data_path, function_tmpdir): assert f_datum == rf_datum else: assert np.array_equal(f_data[0][0], rf_data[0][0]) + + +@pytest.fixture +@pytest.mark.mf6 +@requires_exe("mf6") +def mf6_gwf_2sp_st_tr(function_tmpdir): + """ + A basic flow model with 2 stress periods, + first steady-state, the second transient. + """ + + name = "mf6_gwf_2sp" + sim = flopy.mf6.MFSimulation( + sim_name=name, + version="mf6", + exe_name="mf6", + sim_ws=function_tmpdir, + ) + + tdis = flopy.mf6.ModflowTdis( + simulation=sim, + nper=2, + perioddata=[(0, 1, 1), (10, 10, 1)], + ) + + ims = flopy.mf6.ModflowIms( + simulation=sim, + complexity="SIMPLE", + ) + + gwf = flopy.mf6.ModflowGwf( + simulation=sim, + modelname=name, + save_flows=True, + ) + + dis = flopy.mf6.ModflowGwfdis( + model=gwf, nlay=1, nrow=1, ncol=10, delr=1, delc=10, top=10, botm=0 + ) + + npf = flopy.mf6.ModflowGwfnpf( + model=gwf, + icelltype=[0], + k=10, + ) + + ic = flopy.mf6.ModflowGwfic( + model=gwf, + strt=0, + ) + + wel = flopy.mf6.ModflowGwfwel( + model=gwf, + stress_period_data={0: 0, 1: [[(0, 0, 0), -1]]}, + ) + + sto = flopy.mf6.ModflowGwfsto( + model=gwf, + ss=1e-4, + steady_state={0: True}, + transient={1: True}, + ) + + chd = flopy.mf6.ModflowGwfchd( + model=gwf, + stress_period_data={0: [[(0, 0, 9), 0]]}, + ) + + oc = flopy.mf6.ModflowGwfoc( + model=gwf, + budget_filerecord=f"{name}.cbc", + head_filerecord=f"{name}.hds", + saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], + ) + + return sim + + +def test_read_mf6_2sp(mf6_gwf_2sp_st_tr): + sim = mf6_gwf_2sp_st_tr + gwf = sim.get_model() + sim.write_simulation(silent=False) + success, _ = sim.run_simulation(silent=False) + assert success + + # load heads and flows + hds = gwf.output.head() + cbb = gwf.output.budget() + + # check times + exp_times = [float(t) for t in range(11)] + assert hds.get_times() == exp_times + assert cbb.get_times() == exp_times + + # check stress periods and time steps + exp_kstpkper = [(0, 0)] + [(i, 1) for i in range(10)] + assert hds.get_kstpkper() == exp_kstpkper + assert cbb.get_kstpkper() == exp_kstpkper + + # check head data access by time + exp_hds_data = np.array([[list(repeat(0.0, 10))]]) + hds_data = hds.get_data(totim=0) + assert np.array_equal(hds_data, exp_hds_data) + + # check budget file data by time + cbb_data = cbb.get_data(totim=0) + assert len(cbb_data) > 0 + + # check head data access by kstp and kper + hds_data = hds.get_data(kstpkper=(0, 0)) + assert np.array_equal(hds_data, exp_hds_data) + + # check budget file data by kstp and kper + cbb_data_kstpkper = cbb.get_data(kstpkper=(0, 0)) + assert len(cbb_data) == len(cbb_data_kstpkper) + for i in range(len(cbb_data)): + assert np.array_equal(cbb_data[i], cbb_data_kstpkper[i]) + + +@pytest.mark.parametrize("compact", [True, False]) +def test_read_mf2005_freyberg(example_data_path, function_tmpdir, compact): + m = flopy.modflow.Modflow.load( + example_data_path / "freyberg" / "freyberg.nam", + ) + m.change_model_ws(function_tmpdir) + oc = m.get_package("OC") + oc.compact = compact + + m.write_input() + success, buff = m.run_model(silent=False) + assert success + + # load heads and flows + hds_file = function_tmpdir / 
"freyberg.hds" + cbb_file = function_tmpdir / "freyberg.cbc" + assert hds_file.is_file() + assert cbb_file.is_file() + hds = HeadFile(hds_file) + cbb = CellBudgetFile(cbb_file, model=m) # failing to specify a model... + + # check times + exp_times = [10.0] + assert hds.get_times() == exp_times + assert cbb.get_times() == exp_times # ...causes get_times() to be empty + + # check stress periods and time steps + exp_kstpkper = [(0, 0)] + assert hds.get_kstpkper() == exp_kstpkper + assert cbb.get_kstpkper() == exp_kstpkper + + # check head data access by time + hds_data_totim = hds.get_data(totim=exp_times[0]) + assert hds_data_totim.shape == (1, 40, 20) + + # check budget file data by time + cbb_data = cbb.get_data(totim=exp_times[0]) + assert len(cbb_data) > 0 + + # check head data access by kstp and kper + hds_data_kstpkper = hds.get_data(kstpkper=(0, 0)) + assert np.array_equal(hds_data_kstpkper, hds_data_totim) + + # check budget file data by kstp and kper + cbb_data_kstpkper = cbb.get_data(kstpkper=(0, 0)) + assert len(cbb_data) == len(cbb_data_kstpkper) + for i in range(len(cbb_data)): + assert np.array_equal(cbb_data[i], cbb_data_kstpkper[i]) diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index 9bd90b90d0..cf384fa359 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -1014,6 +1014,7 @@ def __init__( self.imethlist = [] self.paknamlist = [] self.nrecords = 0 + self.compact = True # compact budget file flag self.dis = None self.modelgrid = None @@ -1190,7 +1191,9 @@ def _build_index(self): header = self._get_header() self.nrecords += 1 totim = header["totim"] - if totim == 0: + # if old-style (non-compact) file, + # compute totim from kstp and kper + if not self.compact: totim = self._totim_from_kstpkper( (header["kstp"] - 1, header["kper"] - 1) ) @@ -1343,7 +1346,8 @@ def _get_header(self): """ header1 = binaryread(self.file, self.header1_dtype, (1,)) nlay = header1["nlay"] - if nlay < 0: + self.compact = bool(nlay < 0) + if self.compact: # fill header2 by first reading imeth, delt, pertim and totim # and then adding modelnames and paknames if imeth = 6 temp = binaryread(self.file, self.header2_dtype0, (1,)) From 1197094b850cf148678809e61b065b4fba3d8010 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sat, 30 Sep 2023 10:38:30 -0400 Subject: [PATCH 069/101] test(generate_classes): skip on master and release branches (#1971) --- autotest/test_generate_classes.py | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py index cfbd4bd7a5..cff9a37f31 100644 --- a/autotest/test_generate_classes.py +++ b/autotest/test_generate_classes.py @@ -7,6 +7,9 @@ from warnings import warn import pytest +from modflow_devtools.misc import get_current_branch + +branch = get_current_branch() def nonempty(itr: Iterable): @@ -56,6 +59,10 @@ def pytest_generate_tests(metafunc): @pytest.mark.mf6 @pytest.mark.slow @pytest.mark.regression +@pytest.mark.skipif( + branch == "master" or branch.startswith("v"), + reason="skip on master and release branches", +) def test_generate_classes_from_github_refs( request, virtualenv, project_root_path, ref, worker_id ): From 1eddf86331e46db564749a4a7fde260ebae6c7b5 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Wed, 4 Oct 2023 08:18:09 -0400 Subject: [PATCH 070/101] docs: use html for notebook download links (#1976) --- .docs/conf.py | 44 ++++++++++++++++++++++++++++++++------------ .docs/prolog.rst | 10 ---------- 2 files changed, 32 insertions(+), 22 deletions(-) 
delete mode 100644 .docs/prolog.rst diff --git a/.docs/conf.py b/.docs/conf.py index dd3b87f92c..0113ff54c1 100644 --- a/.docs/conf.py +++ b/.docs/conf.py @@ -23,15 +23,8 @@ # -- determine if running on readthedocs ------------------------------------ on_rtd = os.environ.get("READTHEDOCS") == "True" -# -- determine if this version is a release candidate -with open("../README.md") as f: - lines = f.readlines() -rc_text = "" -for line in lines: - if line.startswith("### Version"): - if "release candidate" in line: - rc_text = "release candidate" - break +# -- determine if this is a development or release version ------------------ +branch_or_version = __version__ if "dev" not in __version__ else "develop" # -- get authors with open("../CITATION.cff") as f: @@ -45,8 +38,6 @@ for line in lines: if line.startswith("**Documentation for version"): line = f"**Documentation for version {__version__}" - if rc_text != "": - line += f" --- {rc_text}" line += "**\n" f.write(line) @@ -272,6 +263,35 @@ nbsphinx_prolog = ( r""" {% set docname = env.doc2path(env.docname, base=None) %} + +.. raw:: html + +
+    <div class="admonition note">
+      This page was generated from <a href="https://github.com/modflowpy/flopy/blob/develop/.docs/{{ docname[:-6] }}.py">
+      {{ env.docname.split('/')|last|e + '.py' }}</a>.
+      It's also available as a <a href="../{{ docname }}">notebook</a>.
+    </div>
""" - + Path("prolog.rst").read_text() ) + +# Import Matplotlib to avoid this message in notebooks: +# "Matplotlib is building the font cache; this may take a moment." +import matplotlib.pyplot diff --git a/.docs/prolog.rst b/.docs/prolog.rst deleted file mode 100644 index 3efa2ef7d0..0000000000 --- a/.docs/prolog.rst +++ /dev/null @@ -1,10 +0,0 @@ -.. only:: html - - .. role:: raw-html(raw) - :format: html - - .. note:: - - | This page was generated from `{{ docname[:-6] }}`__. Download as :download:`a Jupyter notebook (.ipynb) <../{{ docname }}>` or :download:`Python script (.py) <../{{ docname[:-6] }}.py>`. - - __ https://github.com/modflowpy/flopy/blob/develop/.docs/{{ docname[:-6] }}.py \ No newline at end of file From f82fdf6be34770e0e98567c70eac30d34a525794 Mon Sep 17 00:00:00 2001 From: scottrp <45947939+scottrp@users.noreply.github.com> Date: Wed, 4 Oct 2023 09:39:14 -0700 Subject: [PATCH 071/101] feat(pandas support) (#1955) --- .../modpath7_structured_transient_example.py | 22 +- autotest/regression/test_mf6.py | 2 +- autotest/regression/test_mf6_pandas.py | 353 +++ autotest/test_mf6.py | 4 +- autotest/test_model_splitter.py | 14 +- flopy/mf6/data/dfn/gwf-chd.dfn | 1 + flopy/mf6/data/dfn/gwf-drn.dfn | 1 + flopy/mf6/data/dfn/gwf-evt.dfn | 1 + flopy/mf6/data/dfn/gwf-evta.dfn | 1 + flopy/mf6/data/dfn/gwf-ghb.dfn | 1 + flopy/mf6/data/dfn/gwf-lak.dfn | 1 + flopy/mf6/data/dfn/gwf-maw.dfn | 1 + flopy/mf6/data/dfn/gwf-rch.dfn | 1 + flopy/mf6/data/dfn/gwf-rcha.dfn | 1 + flopy/mf6/data/dfn/gwf-riv.dfn | 1 + flopy/mf6/data/dfn/gwf-sfr.dfn | 1 + flopy/mf6/data/dfn/gwf-uzf.dfn | 1 + flopy/mf6/data/dfn/gwf-wel.dfn | 1 + flopy/mf6/data/mfdata.py | 40 +- flopy/mf6/data/mfdataarray.py | 1 - flopy/mf6/data/mfdatalist.py | 66 +- flopy/mf6/data/mfdataplist.py | 2499 +++++++++++++++++ flopy/mf6/data/mfdatascalar.py | 1 - flopy/mf6/data/mfdatastorage.py | 150 +- flopy/mf6/data/mfdatautil.py | 210 +- flopy/mf6/data/mffileaccess.py | 7 +- flopy/mf6/data/mfstructure.py | 40 +- flopy/mf6/mfpackage.py | 243 +- flopy/mf6/mfsimbase.py | 10 + flopy/mf6/modflow/mfgwfchd.py | 7 +- flopy/mf6/modflow/mfgwfdrn.py | 7 +- flopy/mf6/modflow/mfgwfevt.py | 7 +- flopy/mf6/modflow/mfgwfevta.py | 7 +- flopy/mf6/modflow/mfgwfghb.py | 7 +- flopy/mf6/modflow/mfgwflak.py | 5 +- flopy/mf6/modflow/mfgwfmaw.py | 5 +- flopy/mf6/modflow/mfgwfrch.py | 7 +- flopy/mf6/modflow/mfgwfrcha.py | 7 +- flopy/mf6/modflow/mfgwfriv.py | 7 +- flopy/mf6/modflow/mfgwfsfr.py | 5 +- flopy/mf6/modflow/mfgwfuzf.py | 5 +- flopy/mf6/modflow/mfgwfwel.py | 7 +- flopy/mf6/modflow/mfsimulation.py | 7 +- flopy/mf6/utils/createpackages.py | 17 +- flopy/mf6/utils/model_splitter.py | 41 +- 45 files changed, 3453 insertions(+), 370 deletions(-) create mode 100644 autotest/regression/test_mf6_pandas.py create mode 100644 flopy/mf6/data/mfdataplist.py diff --git a/.docs/Notebooks/modpath7_structured_transient_example.py b/.docs/Notebooks/modpath7_structured_transient_example.py index 717b77a3e0..fb76ebff6b 100644 --- a/.docs/Notebooks/modpath7_structured_transient_example.py +++ b/.docs/Notebooks/modpath7_structured_transient_example.py @@ -6,15 +6,25 @@ # extension: .py # format_name: light # format_version: '1.5' -# jupytext_version: 1.14.5 +# jupytext_version: 1.14.4 # kernelspec: -# display_name: Python 3 +# display_name: Python 3 (ipykernel) # language: python # name: python3 +# language_info: +# codemirror_mode: +# name: ipython +# version: 3 +# file_extension: .py +# mimetype: text/x-python +# name: python +# nbconvert_exporter: python +# pygments_lexer: 
ipython3 +# version: 3.9.12 # metadata: -# section: modpath # authors: -# - name: Wes Bonelli +# - name: Wes Bonelli +# section: modpath # --- # # Using MODPATH 7 with structured grids (transient example) @@ -228,7 +238,7 @@ def no_flow(w): [drain[0], drain[1], i + drain[2][0], 322.5, 100000.0, 6] for i in range(drain[2][1] - drain[2][0]) ] -drn = flopy.mf6.modflow.mfgwfdrn.ModflowGwfdrn(gwf, stress_period_data={0: dd}) +drn = flopy.mf6.modflow.mfgwfdrn.ModflowGwfdrn(gwf, auxiliary=["IFACE"], stress_period_data={0: dd}) # output control headfile = f"{sim_name}.hds" @@ -400,3 +410,5 @@ def add_legend(ax): temp_dir.cleanup() except: pass + + diff --git a/autotest/regression/test_mf6.py b/autotest/regression/test_mf6.py index 408165187e..7fd6e9161d 100644 --- a/autotest/regression/test_mf6.py +++ b/autotest/regression/test_mf6.py @@ -1091,7 +1091,7 @@ def test_np002(function_tmpdir, example_data_path): md2 = sim2.get_model() ghb2 = md2.get_package("ghb") spd2 = ghb2.stress_period_data.get_data(1) - assert spd2 == [] + assert len(spd2) == 0 # test paths sim_path_test = Path(ws) / "sim_path" diff --git a/autotest/regression/test_mf6_pandas.py b/autotest/regression/test_mf6_pandas.py new file mode 100644 index 0000000000..c8123396b5 --- /dev/null +++ b/autotest/regression/test_mf6_pandas.py @@ -0,0 +1,353 @@ +import copy +import os +import shutil +import sys +from pathlib import Path + +import numpy as np +import pandas +import pandas as pd +import pytest +from modflow_devtools.markers import requires_exe, requires_pkg + +import flopy +from flopy.mf6 import ( + ExtFileAction, + MFModel, + MFSimulation, + ModflowGwf, + ModflowGwfchd, + ModflowGwfdis, + ModflowGwfdisv, + ModflowGwfdrn, + ModflowGwfevt, + ModflowGwfevta, + ModflowGwfghb, + ModflowGwfgnc, + ModflowGwfgwf, + ModflowGwfgwt, + ModflowGwfhfb, + ModflowGwfic, + ModflowGwfnpf, + ModflowGwfoc, + ModflowGwfrch, + ModflowGwfrcha, + ModflowGwfriv, + ModflowGwfsfr, + ModflowGwfsto, + ModflowGwfwel, + ModflowGwtadv, + ModflowGwtdis, + ModflowGwtic, + ModflowGwtmst, + ModflowGwtoc, + ModflowGwtssm, + ModflowIms, + ModflowTdis, + ModflowUtltas, +) +from flopy.mf6.data.mfdataplist import MFPandasList + + +pytestmark = pytest.mark.mf6 + + +@requires_exe("mf6") +@pytest.mark.regression +def test_pandas_001(function_tmpdir, example_data_path): + # init paths + test_ex_name = "pd001" + model_name = "pd001_mod" + data_path = example_data_path / "mf6" / "create_tests" / test_ex_name + ws = function_tmpdir / "ws" + + expected_output_folder = data_path / "expected_output" + expected_head_file = expected_output_folder / "pd001_mod.hds" + expected_cbc_file = expected_output_folder / "pd001_mod.cbc" + + # model tests + sim = MFSimulation( + sim_name=test_ex_name, + version="mf6", + exe_name="mf6", + sim_ws=ws, + continue_=True, + memory_print_option="summary", + use_pandas=True, + ) + name = sim.name_file + assert name.continue_.get_data() + assert name.nocheck.get_data() is None + assert name.memory_print_option.get_data() == "summary" + assert sim.simulation_data.use_pandas + + tdis_rc = [(6.0, 2, 1.0), (6.0, 3, 1.0)] + tdis_package = ModflowTdis( + sim, time_units="DAYS", nper=2, perioddata=tdis_rc + ) + # replace with real ims file + ims_package = ModflowIms( + sim, + pname="my_ims_file", + filename=f"{test_ex_name}.ims", + print_option="ALL", + complexity="SIMPLE", + outer_dvclose=0.00001, + outer_maximum=50, + under_relaxation="NONE", + inner_maximum=30, + inner_dvclose=0.00001, + linear_acceleration="CG", + preconditioner_levels=7, + 
preconditioner_drop_tolerance=0.01, + number_orthogonalizations=2, + ) + model = ModflowGwf( + sim, modelname=model_name, model_nam_file=f"{model_name}.nam" + ) + top = {"filename": "top.txt", "data": 100.0} + botm = {"filename": "botm.txt", "data": 50.0} + dis_package = ModflowGwfdis( + model, + length_units="FEET", + nlay=1, + nrow=1, + ncol=10, + delr=500.0, + delc=500.0, + top=top, + botm=botm, + filename=f"{model_name}.dis", + pname="mydispkg", + ) + ic_package = ModflowGwfic(model, strt=80.0, filename=f"{model_name}.ic") + npf_package = ModflowGwfnpf( + model, + save_flows=True, + alternative_cell_averaging="logarithmic", + icelltype=1, + k=5.0, + ) + oc_package = ModflowGwfoc( + model, + budget_filerecord=[("np001_mod 1.cbc",)], + head_filerecord=[("np001_mod 1.hds",)], + saverecord={ + 0: [("HEAD", "ALL"), ("BUDGET", "ALL")], + 1: [], + }, + printrecord=[("HEAD", "ALL")], + ) + empty_sp_text = oc_package.saverecord.get_file_entry(1) + assert empty_sp_text == "" + oc_package.printrecord.add_transient_key(1) + oc_package.printrecord.set_data([("HEAD", "ALL"), ("BUDGET", "ALL")], 1) + oc_package.saverecord.set_data([("HEAD", "ALL"), ("BUDGET", "ALL")], 1) + sto_package = ModflowGwfsto( + model, save_flows=True, iconvert=1, ss=0.000001, sy=0.15 + ) + + # test saving a text file with recarray data + data_line = [((0, 0, 4), -2000.0), ((0, 0, 7), -2.0)] + type_list = [("cellid", object), ("q", float)] + data_rec = np.rec.array(data_line, type_list) + well_spd = { + 0: { + "filename": "wel_0.txt", + "data": data_rec, + }, + 1: None, + } + wel_package = ModflowGwfwel( + model, + filename=f"{model_name}.wel", + print_input=True, + print_flows=True, + save_flows=True, + maxbound=2, + stress_period_data=well_spd, + ) + + wel_package.stress_period_data.add_transient_key(1) + # text user generated pandas dataframe without headers + data_pd = pd.DataFrame([(0, 0, 4, -1000.0), (0, 0, 7, -20.0)]) + wel_package.stress_period_data.set_data( + {1: {"filename": "wel_1.txt", "iprn": 1, "data": data_pd}} + ) + + # test getting data + assert isinstance(wel_package.stress_period_data, MFPandasList) + well_data_pd = wel_package.stress_period_data.get_dataframe(0) + assert isinstance(well_data_pd, pd.DataFrame) + assert well_data_pd.iloc[0, 0] == 0 + assert well_data_pd.iloc[0, 1] == 0 + assert well_data_pd.iloc[0, 2] == 4 + assert well_data_pd.iloc[0, 3] == -2000.0 + assert well_data_pd["layer"][0] == 0 + assert well_data_pd["row"][0] == 0 + assert well_data_pd["column"][0] == 4 + assert well_data_pd["q"][0] == -2000.0 + assert well_data_pd["layer"][1] == 0 + assert well_data_pd["row"][1] == 0 + assert well_data_pd["column"][1] == 7 + assert well_data_pd["q"][1] == -2.0 + + well_data_rec = wel_package.stress_period_data.get_data(0) + assert isinstance(well_data_rec, np.recarray) + assert well_data_rec[0][0] == (0, 0, 4) + assert well_data_rec[0][1] == -2000.0 + + # test time series dat + drn_package = ModflowGwfdrn( + model, + print_input=True, + print_flows=True, + save_flows=True, + maxbound=1, + timeseries=[(0.0, 60.0), (100000.0, 60.0)], + stress_period_data=[((0, 0, 0), 80, "drn_1")], + ) + drn_package.ts.time_series_namerecord = "drn_1" + drn_package.ts.interpolation_methodrecord = "linearend" + + # test data with aux vars + riv_spd = { + 0: { + "filename": "riv_0.txt", + "data": [((0, 0, 9), 110, 90.0, 100.0, 1.0, 2.0, 3.0)], + } + } + riv_package = ModflowGwfriv( + model, + print_input=True, + print_flows=True, + save_flows=True, + maxbound=1, + auxiliary=["var1", "var2", "var3"], + 
stress_period_data=riv_spd, + ) + riv_data = riv_package.stress_period_data.get_data(0) + assert riv_data[0][0] == (0, 0, 9) + assert riv_data[0][1] == 110 + assert riv_data[0][2] == 90.0 + assert riv_data[0][3] == 100.0 + assert riv_data[0][4] == 1.0 + assert riv_data[0][5] == 2.0 + assert riv_data[0][6] == 3.0 + + # write simulation to new location + sim.write_simulation() + + # run simulation + success, buff = sim.run_simulation() + assert success, f"simulation {sim.name} did not run" + + # modify external files to resemble user generated text + wel_file_0_pth = os.path.join(sim.sim_path, "wel_0.txt") + wel_file_1_pth = os.path.join(sim.sim_path, "wel_1.txt") + riv_file_0_pth = os.path.join(sim.sim_path, "riv_0.txt") + with open(wel_file_0_pth, "w") as fd_wel_0: + fd_wel_0.write("# comment header\n\n") + fd_wel_0.write("1 1 5 -2000.0 # comment\n") + fd_wel_0.write("# more comments\n") + fd_wel_0.write("1 1 8 -2.0\n") + fd_wel_0.write("# more comments\n") + + with open(wel_file_1_pth, "w") as fd_wel_1: + fd_wel_1.write("# comment header\n\n") + fd_wel_1.write("\t1\t1\t5\t-1000.0\t# comment\n") + fd_wel_1.write("# more comments\n") + fd_wel_1.write("1 1\t8\t-20.0\n") + fd_wel_1.write("# more comments\n") + + with open(riv_file_0_pth, "w") as fd_riv_0: + fd_riv_0.write("# comment header\n\n") + fd_riv_0.write( + "1\t1\t10\t110\t9.00000000E+01\t1.00000000E+02" + "\t1.00000000E+00\t2.00000000E+00\t3.00000000E+00" + "\t# comment\n" + ) + + # test loading and checking data + test_sim = MFSimulation.load( + test_ex_name, + "mf6", + "mf6", + sim.sim_path, + write_headers=False, + ) + test_mod = test_sim.get_model() + test_wel = test_mod.get_package("wel") + + well_data_pd_0 = test_wel.stress_period_data.get_dataframe(0) + assert isinstance(well_data_pd_0, pd.DataFrame) + assert well_data_pd_0.iloc[0, 0] == 0 + assert well_data_pd_0.iloc[0, 1] == 0 + assert well_data_pd_0.iloc[0, 2] == 4 + assert well_data_pd_0.iloc[0, 3] == -2000.0 + assert well_data_pd_0["layer"][0] == 0 + assert well_data_pd_0["row"][0] == 0 + assert well_data_pd_0["column"][0] == 4 + assert well_data_pd_0["q"][0] == -2000.0 + assert well_data_pd_0["layer"][1] == 0 + assert well_data_pd_0["row"][1] == 0 + assert well_data_pd_0["column"][1] == 7 + assert well_data_pd_0["q"][1] == -2.0 + well_data_pd = test_wel.stress_period_data.get_dataframe(1) + assert isinstance(well_data_pd, pd.DataFrame) + assert well_data_pd.iloc[0, 0] == 0 + assert well_data_pd.iloc[0, 1] == 0 + assert well_data_pd.iloc[0, 2] == 4 + assert well_data_pd.iloc[0, 3] == -1000.0 + assert well_data_pd["layer"][0] == 0 + assert well_data_pd["row"][0] == 0 + assert well_data_pd["column"][0] == 4 + assert well_data_pd["q"][0] == -1000.0 + assert well_data_pd["layer"][1] == 0 + assert well_data_pd["row"][1] == 0 + assert well_data_pd["column"][1] == 7 + assert well_data_pd["q"][1] == -20.0 + test_riv = test_mod.get_package("riv") + riv_data_pd = test_riv.stress_period_data.get_dataframe(0) + assert riv_data_pd.iloc[0, 0] == 0 + assert riv_data_pd.iloc[0, 1] == 0 + assert riv_data_pd.iloc[0, 2] == 9 + assert riv_data_pd.iloc[0, 3] == 110 + assert riv_data_pd.iloc[0, 4] == 90.0 + assert riv_data_pd.iloc[0, 5] == 100.0 + assert riv_data_pd.iloc[0, 6] == 1.0 + assert riv_data_pd.iloc[0, 7] == 2.0 + assert riv_data_pd.iloc[0, 8] == 3.0 + + well_data_array = test_wel.stress_period_data.to_array() + assert "q" in well_data_array + array = np.array([0.0, 0.0, 0.0, 0.0, -2000.0, 0.0, 0.0, -2.0, 0.0, 0.0]) + array.reshape((1, 1, 10)) + compare = well_data_array["q"] == 
array + assert compare.all() + + well_data_record = test_wel.stress_period_data.get_record() + assert 0 in well_data_record + assert "binary" in well_data_record[0] + well_data_record[0]["binary"] = True + well_data_pd_0.iloc[0, 3] = -10000.0 + well_data_record[0]["data"] = well_data_pd_0 + test_wel.stress_period_data.set_record(well_data_record) + + updated_record = test_wel.stress_period_data.get_record(data_frame=True) + assert 0 in updated_record + assert "binary" in updated_record[0] + assert updated_record[0]["binary"] + assert isinstance(updated_record[0]["data"], pandas.DataFrame) + assert updated_record[0]["data"]["q"][0] == -10000 + + record = [0, 0, 2, -111.0] + test_wel.stress_period_data.append_list_as_record(record, 0) + + combined_data = test_wel.stress_period_data.get_dataframe(0) + assert len(combined_data.axes[0]) == 3 + assert combined_data["q"][2] == -111.0 + + test_drn = test_mod.get_package("drn") + file_entry = test_drn.stress_period_data.get_file_entry() + assert file_entry.strip() == "1 1 1 80 drn_1" diff --git a/autotest/test_mf6.py b/autotest/test_mf6.py index 71d231da40..0bf3832f21 100644 --- a/autotest/test_mf6.py +++ b/autotest/test_mf6.py @@ -1817,7 +1817,7 @@ def test_array(function_tmpdir): drn_gd_1 = drn.stress_period_data.get_data(1) assert drn_gd_1 is None drn_gd_2 = drn.stress_period_data.get_data(2) - assert drn_gd_2 == [] + assert len(drn_gd_2) == 0 drn_gd_3 = drn.stress_period_data.get_data(3) assert drn_gd_3[0][1] == 55.0 @@ -1964,7 +1964,7 @@ def test_array(function_tmpdir): drn_gd_1 = drn.stress_period_data.get_data(1) assert drn_gd_1 is None drn_gd_2 = drn.stress_period_data.get_data(2) - assert drn_gd_2 == [] + assert len(drn_gd_2) == 0 drn_gd_3 = drn.stress_period_data.get_data(3) assert drn_gd_3[0][1] == 55.0 diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index e743399d91..e28e809a47 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -344,19 +344,15 @@ def test_control_records(function_tmpdir): "Binary file input not being preserved for MFArray" ) - spd_ls1 = ml1.wel.stress_period_data._data_storage[ - 1 - ].layer_storage.multi_dim_list[0] - spd_ls2 = ml1.wel.stress_period_data._data_storage[ - 2 - ].layer_storage.multi_dim_list[0] - - if spd_ls1.data_storage_type.value != 3 or spd_ls1.binary: + spd_ls1 = ml1.wel.stress_period_data.get_record(1) + spd_ls2 = ml1.wel.stress_period_data.get_record(2) + + if spd_ls1["filename"] is None or spd_ls1["binary"]: raise AssertionError( "External ascii files not being preserved for MFList" ) - if spd_ls2.data_storage_type.value != 3 or not spd_ls2.binary: + if spd_ls2["filename"] is None or not spd_ls2["binary"]: raise AssertionError( "External binary file input not being preseved for MFList" ) diff --git a/flopy/mf6/data/dfn/gwf-chd.dfn b/flopy/mf6/data/dfn/gwf-chd.dfn index 543433c1b3..6fbf9c1220 100644 --- a/flopy/mf6/data/dfn/gwf-chd.dfn +++ b/flopy/mf6/data/dfn/gwf-chd.dfn @@ -1,5 +1,6 @@ # --------------------- gwf chd options --------------------- # flopy multi-package +# package-type stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-drn.dfn b/flopy/mf6/data/dfn/gwf-drn.dfn index c03a107800..a1ee3aeb0a 100644 --- a/flopy/mf6/data/dfn/gwf-drn.dfn +++ b/flopy/mf6/data/dfn/gwf-drn.dfn @@ -1,5 +1,6 @@ # --------------------- gwf drn options --------------------- # flopy multi-package +# package-type stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-evt.dfn 
b/flopy/mf6/data/dfn/gwf-evt.dfn index b66f62301c..7073307a0e 100644 --- a/flopy/mf6/data/dfn/gwf-evt.dfn +++ b/flopy/mf6/data/dfn/gwf-evt.dfn @@ -1,5 +1,6 @@ # --------------------- gwf evt options --------------------- # flopy multi-package +# package-type stress-package block options name fixed_cell diff --git a/flopy/mf6/data/dfn/gwf-evta.dfn b/flopy/mf6/data/dfn/gwf-evta.dfn index 19ca3cec45..22e3060fad 100644 --- a/flopy/mf6/data/dfn/gwf-evta.dfn +++ b/flopy/mf6/data/dfn/gwf-evta.dfn @@ -1,5 +1,6 @@ # --------------------- gwf evta options --------------------- # flopy multi-package +# package-type stress-package block options name readasarrays diff --git a/flopy/mf6/data/dfn/gwf-ghb.dfn b/flopy/mf6/data/dfn/gwf-ghb.dfn index f84a370869..bab639ba85 100644 --- a/flopy/mf6/data/dfn/gwf-ghb.dfn +++ b/flopy/mf6/data/dfn/gwf-ghb.dfn @@ -1,5 +1,6 @@ # --------------------- gwf ghb options --------------------- # flopy multi-package +# package-type stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-lak.dfn b/flopy/mf6/data/dfn/gwf-lak.dfn index d73c4d13b7..3dc9e940c0 100644 --- a/flopy/mf6/data/dfn/gwf-lak.dfn +++ b/flopy/mf6/data/dfn/gwf-lak.dfn @@ -1,5 +1,6 @@ # --------------------- gwf lak options --------------------- # flopy multi-package +# package-type advanced-stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-maw.dfn b/flopy/mf6/data/dfn/gwf-maw.dfn index d9bf3912e4..4a7784baf3 100644 --- a/flopy/mf6/data/dfn/gwf-maw.dfn +++ b/flopy/mf6/data/dfn/gwf-maw.dfn @@ -1,5 +1,6 @@ # --------------------- gwf maw options --------------------- # flopy multi-package +# package-type advanced-stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-rch.dfn b/flopy/mf6/data/dfn/gwf-rch.dfn index 0a36ae078b..d452cdead4 100644 --- a/flopy/mf6/data/dfn/gwf-rch.dfn +++ b/flopy/mf6/data/dfn/gwf-rch.dfn @@ -1,5 +1,6 @@ # --------------------- gwf rch options --------------------- # flopy multi-package +# package-type stress-package block options name fixed_cell diff --git a/flopy/mf6/data/dfn/gwf-rcha.dfn b/flopy/mf6/data/dfn/gwf-rcha.dfn index bc5874ef59..b38ad9148a 100644 --- a/flopy/mf6/data/dfn/gwf-rcha.dfn +++ b/flopy/mf6/data/dfn/gwf-rcha.dfn @@ -1,5 +1,6 @@ # --------------------- gwf rcha options --------------------- # flopy multi-package +# package-type stress-package block options name readasarrays diff --git a/flopy/mf6/data/dfn/gwf-riv.dfn b/flopy/mf6/data/dfn/gwf-riv.dfn index a57b653cc0..52105038b2 100644 --- a/flopy/mf6/data/dfn/gwf-riv.dfn +++ b/flopy/mf6/data/dfn/gwf-riv.dfn @@ -1,5 +1,6 @@ # --------------------- gwf riv options --------------------- # flopy multi-package +# package-type stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-sfr.dfn b/flopy/mf6/data/dfn/gwf-sfr.dfn index 40048aeee5..24da890760 100644 --- a/flopy/mf6/data/dfn/gwf-sfr.dfn +++ b/flopy/mf6/data/dfn/gwf-sfr.dfn @@ -1,5 +1,6 @@ # --------------------- gwf sfr options --------------------- # flopy multi-package +# package-type advanced-stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-uzf.dfn b/flopy/mf6/data/dfn/gwf-uzf.dfn index 1b61e1fb6b..fdf5078eba 100644 --- a/flopy/mf6/data/dfn/gwf-uzf.dfn +++ b/flopy/mf6/data/dfn/gwf-uzf.dfn @@ -1,5 +1,6 @@ # --------------------- gwf uzf options --------------------- # flopy multi-package +# package-type advanced-stress-package block options name auxiliary diff --git a/flopy/mf6/data/dfn/gwf-wel.dfn 
b/flopy/mf6/data/dfn/gwf-wel.dfn index 4f7710ac5d..56a1293f9c 100644 --- a/flopy/mf6/data/dfn/gwf-wel.dfn +++ b/flopy/mf6/data/dfn/gwf-wel.dfn @@ -1,5 +1,6 @@ # --------------------- gwf wel options --------------------- # flopy multi-package +# package-type stress-package block options name auxiliary diff --git a/flopy/mf6/data/mfdata.py b/flopy/mf6/data/mfdata.py index f4ea5ee043..2aafe39f07 100644 --- a/flopy/mf6/data/mfdata.py +++ b/flopy/mf6/data/mfdata.py @@ -95,9 +95,6 @@ def update_transient_key(self, old_transient_key, new_transient_key): # update current key self._current_key = new_transient_key - def _transient_setup(self, data_storage): - self._data_storage = data_storage - def get_data_prep(self, transient_key=0): if isinstance(transient_key, int): self._verify_sp(transient_key) @@ -596,26 +593,43 @@ def _get_external_formatting_string(self, layer, ext_file_action): else: layer_storage = storage.layer_storage[layer] # resolve external file path + ret, fname = self._get_external_formatting_str( + layer_storage.fname, + layer_storage.factor, + layer_storage.binary, + layer_storage.iprn, + storage.data_structure_type, + ext_file_action, + ) + layer_storage.fname = fname + return ret + + def _get_external_formatting_str( + self, fname, factor, binary, iprn, data_type, ext_file_action + ): file_mgmt = self._simulation_data.mfpath model_name = self._data_dimensions.package_dim.model_dim[0].model_name ext_file_path = file_mgmt.get_updated_path( - layer_storage.fname, model_name, ext_file_action + fname, model_name, ext_file_action ) - layer_storage.fname = datautil.clean_filename(ext_file_path) + fname = datautil.clean_filename(ext_file_path) ext_format = ["OPEN/CLOSE", f"'{ext_file_path}'"] - if storage.data_structure_type != DataStructureType.recarray: - if layer_storage.factor is not None: + if data_type != DataStructureType.recarray: + if factor is not None: data_type = self.structure.get_datum_type( return_enum_type=True ) ext_format.append("FACTOR") if data_type == DatumType.integer: - ext_format.append(str(int(layer_storage.factor))) + ext_format.append(str(int(factor))) else: - ext_format.append(str(layer_storage.factor)) - if layer_storage.binary: + ext_format.append(str(factor)) + if binary: ext_format.append("(BINARY)") - if layer_storage.iprn is not None: + if iprn is not None: ext_format.append("IPRN") - ext_format.append(str(layer_storage.iprn)) - return f"{self._simulation_data.indent_string.join(ext_format)}\n" + ext_format.append(str(iprn)) + return ( + f"{self._simulation_data.indent_string.join(ext_format)}\n", + fname, + ) diff --git a/flopy/mf6/data/mfdataarray.py b/flopy/mf6/data/mfdataarray.py index f54eec2901..51f47bf0c3 100644 --- a/flopy/mf6/data/mfdataarray.py +++ b/flopy/mf6/data/mfdataarray.py @@ -1619,7 +1619,6 @@ def __init__( dimensions=dimensions, block=block, ) - self._transient_setup(self._data_storage) self.repeating = True @property diff --git a/flopy/mf6/data/mfdatalist.py b/flopy/mf6/data/mfdatalist.py index d622a2500d..efb545bc5d 100644 --- a/flopy/mf6/data/mfdatalist.py +++ b/flopy/mf6/data/mfdatalist.py @@ -12,7 +12,7 @@ from ..mfbase import ExtFileAction, MFDataException, VerbosityLevel from ..utils.mfenums import DiscretizationType from .mfdatastorage import DataStorage, DataStorageType, DataStructureType -from .mfdatautil import to_string +from .mfdatautil import list_to_array, to_string from .mffileaccess import MFFileAccessList from .mfstructure import DatumType, MFDataStructure @@ -30,7 +30,7 @@ class MFList(mfdata.MFMultiDimVar, 
DataListInterface): data contained in the simulation structure : MFDataStructure describes the structure of the data - data : list or ndarray + data : list or ndarray or None actual data enable : bool enable/disable the array @@ -141,66 +141,9 @@ def to_array(self, kper=0, mask=False): Dictionary of 3-D numpy arrays containing the stress period data for a selected stress period. The dictionary keys are the MFDataList dtype names for the stress period data.""" - i0 = 1 sarr = self.get_data(key=kper) - if not isinstance(sarr, list): - sarr = [sarr] - if len(sarr) == 0 or sarr[0] is None: - return None - if "inode" in sarr[0].dtype.names: - raise NotImplementedError() - arrays = {} model_grid = self._data_dimensions.get_model_grid() - - if model_grid._grid_type.value == 1: - shape = ( - model_grid.num_layers(), - model_grid.num_rows(), - model_grid.num_columns(), - ) - elif model_grid._grid_type.value == 2: - shape = ( - model_grid.num_layers(), - model_grid.num_cells_per_layer(), - ) - else: - shape = (model_grid.num_cells_per_layer(),) - - for name in sarr[0].dtype.names[i0:]: - if not sarr[0].dtype.fields[name][0] == object: - arr = np.zeros(shape) - arrays[name] = arr.copy() - - if np.isscalar(sarr[0]): - # if there are no entries for this kper - if sarr[0] == 0: - if mask: - for name, arr in arrays.items(): - arrays[name][:] = np.NaN - return arrays - else: - raise Exception("MfList: something bad happened") - - for name, arr in arrays.items(): - cnt = np.zeros(shape, dtype=np.float64) - for sp_rec in sarr: - if sp_rec is not None: - for rec in sp_rec: - arr[rec["cellid"]] += rec[name] - cnt[rec["cellid"]] += 1.0 - # average keys that should not be added - if name != "cond" and name != "flux": - idx = cnt > 0.0 - arr[idx] /= cnt[idx] - if mask: - arr = np.ma.masked_where(cnt == 0.0, arr) - arr[cnt == 0.0] = np.NaN - - arrays[name] = arr.copy() - # elif mask: - # for name, arr in arrays.items(): - # arrays[name][:] = np.NaN - return arrays + return list_to_array(sarr, model_grid, kper, mask) def new_simulation(self, sim_data): """Initialize MFList object for a new simulation. @@ -1396,7 +1339,7 @@ def load( Parameters ---------- - first_line : str + first_line : str, None A string containing the first line of data in this list. file_handle : file descriptor A file handle for the data file which points to the second @@ -1578,7 +1521,6 @@ def __init__( package=package, block=block, ) - self._transient_setup(self._data_storage) self.repeating = True @property diff --git a/flopy/mf6/data/mfdataplist.py b/flopy/mf6/data/mfdataplist.py new file mode 100644 index 0000000000..b0d921bf9d --- /dev/null +++ b/flopy/mf6/data/mfdataplist.py @@ -0,0 +1,2499 @@ +import copy +import inspect +import io +import os +import sys + +import numpy as np +import pandas + +from ...datbase import DataListInterface, DataType +from ...discretization.structuredgrid import StructuredGrid +from ...discretization.unstructuredgrid import UnstructuredGrid +from ...discretization.vertexgrid import VertexGrid +from ...utils import datautil +from ..data import mfdata +from ..mfbase import ExtFileAction, MFDataException, VerbosityLevel +from ..utils.mfenums import DiscretizationType +from .mfdatalist import MFList +from .mfdatastorage import DataStorageType, DataStructureType +from .mfdatautil import list_to_array, process_open_close_line +from .mffileaccess import MFFileAccessList +from .mfstructure import DatumType, MFDataStructure + + +class PandasListStorage: + """ + Contains data storage information for a single list. 
+ + Attributes + ---------- + internal_data : ndarray or recarray + data being stored, if full data is being stored internally in memory + data_storage_type : DataStorageType + method used to store the data + fname : str + file name of external file containing the data + factor : int/float + factor to multiply the data by + iprn : int + print code + binary : bool + whether the data is stored in a binary file + modified : bool + whether data in storage has been modified since last write + + Methods + ------- + get_record : dict + returns a dictionary with all data storage information + set_record(rec) + sets data storage information based on the dictionary "rec" + set_internal(internal_data) + make data storage internal, using "internal_data" as the data + set_external(fname, data) + make data storage external, with file "fname" and external data "data" + internal_size : int + size of the internal data + has_data : bool + whether or not data exists + """ + + def __init__(self): + self.internal_data = None + self.fname = None + self.iprn = None + self.binary = False + self.data_storage_type = None + self.modified = False + + def get_record(self): + rec = {} + if self.internal_data is not None: + rec["data"] = copy.deepcopy(self.internal_data) + if self.fname is not None: + rec["filename"] = self.fname + if self.iprn is not None: + rec["iprn"] = self.iprn + if self.binary is not None: + rec["binary"] = self.binary + return rec + + def set_record(self, rec): + if "data" in rec: + self.internal_data = rec["data"] + if "filename" in rec: + self.fname = rec["filename"] + if "iprn" in rec: + self.iprn = rec["iprn"] + if "binary" in rec: + self.binary = rec["binary"] + + def set_internal(self, internal_data): + self.data_storage_type = DataStorageType.internal_array + self.internal_data = internal_data + self.fname = None + self.binary = False + + def set_external(self, fname, data=None): + self.data_storage_type = DataStorageType.external_file + self.internal_data = data + self.fname = fname + + @property + def internal_size(self): + if not isinstance(self.internal_data, pandas.DataFrame): + return 0 + else: + return len(self.internal_data) + + def has_data(self): + if self.data_storage_type == DataStorageType.internal_array: + return self.internal_data is not None + else: + return self.fname is not None
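+
+# Round-trip sketch of the record dictionary produced and consumed by
+# PandasListStorage (illustrative only; "q" is an assumed column name):
+#
+#     storage = PandasListStorage()
+#     storage.set_internal(pandas.DataFrame({"q": [-1000.0]}))
+#     rec = storage.get_record()   # {"data": <DataFrame>, "binary": False}
+#     rec["binary"] = True
+#     storage.set_record(rec)      # storage.binary is now True
+#     assert storage.has_data()
+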
+ + Parameters + ---------- + sim_data : MFSimulationData + data contained in the simulation + model_or_sim : MFSimulation or MFModel + parent model, or if not part of a model, parent simulation + structure : MFDataStructure + describes the structure of the data + data : list or ndarray or None + actual data + enable : bool + enable/disable the array + path : tuple + path in the data dictionary to this MFArray + dimensions : MFDataDimensions + dimension information related to the model, package, and array + package : MFPackage + parent package + block : MFBlock + parnet block + """ + + def __init__( + self, + sim_data, + model_or_sim, + structure, + data=None, + enable=None, + path=None, + dimensions=None, + package=None, + block=None, + ): + super().__init__( + sim_data, model_or_sim, structure, enable, path, dimensions + ) + self._data_storage = self._new_storage() + self._package = package + self._block = block + self._last_line_info = [] + self._data_line = None + self._temp_dict = {} + self._crnt_line_num = 1 + self._data_header = None + self._header_names = None + self._data_types = None + self._data_item_names = None + self._mg = None + self._current_key = 0 + self._max_file_size = 1000000000000000 + if self._model_or_sim.type == "Model": + self._mg = self._model_or_sim.modelgrid + + if data is not None: + try: + self.set_data(data, True) + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + structure.get_model(), + structure.get_package(), + path, + "setting data", + structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + None, + sim_data.debug, + ex, + ) + + @property + def data_type(self): + """Type of data (DataType) stored in the list""" + return DataType.list + + @property + def package(self): + """Package object that this data belongs to.""" + return self._package + + @property + def dtype(self): + """Type of data (numpy.dtype) stored in the list""" + return self.get_dataframe().dtype + + def _append_type_list(self, data_name, data_type, include_header=False): + if include_header: + self._data_header[data_name] = data_type + self._header_names.append(data_name) + self._data_types.append(data_type) + + def _process_open_close_line(self, arr_line, store=True): + """ + Process open/close line extracting the multiplier, print format, + binary flag, data file path, and any comments + """ + data_dim = self._data_dimensions + ( + multiplier, + print_format, + binary, + data_file, + data, + comment, + ) = process_open_close_line( + arr_line, + data_dim, + self._data_type, + self._simulation_data.debug, + store, + ) + # add to active list of external files + model_name = data_dim.package_dim.model_dim[0].model_name + self._simulation_data.mfpath.add_ext_file(data_file, model_name) + + return data, multiplier, print_format, binary, data_file + + def _add_cellid_fields(self, data, keep_existing=False): + """ + Add cellid fields to a Pandas DataFrame and drop the layer, + row, column, cell, node, fields that the cellid is based on + """ + for data_item in self.structure.data_item_structures: + if data_item.type == DatumType.integer: + if data_item.name.lower() == "cellid": + columns = data.columns.tolist() + if isinstance(self._mg, StructuredGrid): + if ( + "layer" in columns + and "row" in columns + and "column" in columns + ): + data["cellid"] = data[ + ["layer", "row", "column"] + ].apply(tuple, axis=1) + if not keep_existing: + data = data.drop( + columns=["layer", "row", "column"] + ) + elif isinstance(self._mg, 
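+
+    # Sketch of the cellid round trip handled by the two helpers below for a
+    # structured grid (zero-based, illustrative values): _add_cellid_fields
+    # collapses layer/row/column into a single tuple column, and
+    # _remove_cellid_fields drops that column again:
+    #
+    #     layer  row  column  q            cellid     q
+    #     0      1    2       -5.0   <->   (0, 1, 2)  -5.0
+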
+ + def _add_cellid_fields(self, data, keep_existing=False): + """ + Add cellid fields to a Pandas DataFrame and drop the layer, + row, column, cell, or node fields that the cellid is based on + """ + for data_item in self.structure.data_item_structures: + if data_item.type == DatumType.integer: + if data_item.name.lower() == "cellid": + columns = data.columns.tolist() + if isinstance(self._mg, StructuredGrid): + if ( + "layer" in columns + and "row" in columns + and "column" in columns + ): + data["cellid"] = data[ + ["layer", "row", "column"] + ].apply(tuple, axis=1) + if not keep_existing: + data = data.drop( + columns=["layer", "row", "column"] + ) + elif isinstance(self._mg, VertexGrid): + cell_2 = None + if "cell" in columns: + cell_2 = "cell" + elif "ncpl" in columns: + cell_2 = "ncpl" + if cell_2 is not None and "layer" in columns: + data["cellid"] = data[["layer", cell_2]].apply( + tuple, axis=1 + ) + if not keep_existing: + data = data.drop(columns=["layer", cell_2]) + elif isinstance(self._mg, UnstructuredGrid): + if "node" in columns: + data["cellid"] = data[["node"]].apply( + tuple, axis=1 + ) + if not keep_existing: + data = data.drop(columns=["node"]) + else: + raise MFDataException( + "ERROR: Unrecognized model grid " + f"{str(self._mg)} not supported by MFPandasList" + ) + # reorder columns + column_headers = data.columns.tolist() + column_headers.insert(0, column_headers.pop()) + data = data[column_headers] + + return data + + def _remove_cellid_fields(self, data): + """remove cellid fields from data""" + for data_item in self.structure.data_item_structures: + if data_item.type == DatumType.integer: + if data_item.name.lower() == "cellid": + # if there is a cellid field, remove it + if "cellid" in data.columns: + return data.drop("cellid", axis=1) + return data + + def _get_cellid_size(self, data_item_name): + """get the number of spatial coordinates used in the cellid""" + model_num = datautil.DatumUtil.cellid_model_num( + data_item_name, + self._data_dimensions.structure.model_data, + self._data_dimensions.package_dim.model_dim, + ) + model_grid = self._data_dimensions.get_model_grid(model_num=model_num) + return model_grid.get_num_spatial_coordinates() + + def _build_data_header(self): + """ + Constructs lists of data column header names and data column types + based on data structure information, boundname and aux information, + and model discretization type. + """ + # initialize + self._data_header = {} + self._header_names = [] + self._data_types = [] + self._data_item_names = [] + s_type = pandas.StringDtype + f_type = np.float64 + i_type = np.int64 + data_dim = self._data_dimensions + # loop through data structure definition information + for data_item, index in zip( + self.structure.data_item_structures, + range(0, len(self.structure.data_item_structures)), + ): + if data_item.name.lower() == "aux": + # get all of the aux variables for this dataset + aux_var_names = data_dim.package_dim.get_aux_variables() + if aux_var_names is not None: + for aux_var_name in aux_var_names[0]: + if aux_var_name.lower() != "auxiliary": + self._append_type_list(aux_var_name, f_type) + self._data_item_names.append(aux_var_name) + elif data_item.name.lower() == "boundname": + # see if boundnames is enabled for this dataset + if data_dim.package_dim.boundnames(): + self._append_type_list("boundname", s_type) + self._data_item_names.append("boundname") + else: + if data_item.type == DatumType.keyword: + self._append_type_list(data_item.name, s_type) + elif data_item.type == DatumType.string: + self._append_type_list(data_item.name, s_type) + elif data_item.type == DatumType.integer: + if data_item.name.lower() == "cellid": + # get the appropriate cellid column headings for the + # model's discretization type + if isinstance(self._mg, StructuredGrid): + self._append_type_list("layer", i_type, True) + self._append_type_list("row", i_type, True) + self._append_type_list("column", i_type, True) + elif isinstance(self._mg, VertexGrid): + self._append_type_list("layer", i_type, True) + self._append_type_list("cell", i_type, True) + elif isinstance(self._mg, UnstructuredGrid): + self._append_type_list("node", i_type, True) + else: + raise MFDataException( + "ERROR: Unrecognized model grid " + f"{str(self._mg)} not supported by MFPandasList" + ) + else: + self._append_type_list(data_item.name, i_type) + elif data_item.type == DatumType.double_precision: + self._append_type_list(data_item.name, f_type) + else: + self._data_header = None + self._header_names = None + self._data_item_names.append(data_item.name) + + @staticmethod + def _unique_column_name(data, col_base_name): + """generate a unique column name based on "col_base_name" """ + col_name = col_base_name + idx = 2 + while col_name in data: + col_name = f"{col_base_name}_{idx}" + idx += 1 + return col_name + + def _untuple_cellids(self, pdata): + """ + For all cellids in "pdata", convert them to layer, row, column fields + and then drop the cellids from "pdata". Returns the updated "pdata". + """ + if pdata is None or len(pdata) == 0: + return pdata, 0 + fields_to_correct = [] + data_idx = 0 + # find cellid columns that need to be fixed + columns = pdata.columns + for data_item in self.structure.data_item_structures: + if data_idx >= len(columns) + 1: + break + if ( + data_item.type == DatumType.integer + and data_item.name.lower() == "cellid" + ): + if isinstance(pdata.iloc[0, data_idx], tuple): + fields_to_correct.append((data_idx, columns[data_idx])) + data_idx += 1 + else: + data_idx += self._get_cellid_size(data_item.name) + else: + data_idx += 1 + + # fix columns + for field_idx, column_name in fields_to_correct: + # add individual layer/row/column/cell/node columns + if isinstance(self._mg, StructuredGrid): + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "layer"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + pdata.insert( + loc=field_idx + 1, + column=self._unique_column_name(pdata, "row"), + value=pdata.apply(lambda x: x[column_name][1], axis=1), + ) + pdata.insert( + loc=field_idx + 2, + column=self._unique_column_name(pdata, "column"), + value=pdata.apply(lambda x: x[column_name][2], axis=1), + ) + elif isinstance(self._mg, VertexGrid): + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "layer"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + pdata.insert( + loc=field_idx + 1, + column=self._unique_column_name(pdata, "cell"), + value=pdata.apply(lambda x: x[column_name][1], axis=1), + ) + elif isinstance(self._mg, UnstructuredGrid): + if column_name == "node": + # fixing a problem where node was specified as a tuple + # make sure new column is named properly + column_name = "node_2" + pdata = pdata.rename(columns={"node": column_name}) + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "node"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + # remove cellid tuple + pdata = pdata.drop(column_name, axis=1) + return pdata, len(fields_to_correct) + + def _resolve_columns(self, data): + """resolve the column headings for a specific dataset provided""" + if len(data) == 0: + return self._header_names, False + if len(data[0]) == len(self._header_names) or len(data[0]) == 0: + return self._header_names, False + + if len(data[0]) == len(self._data_item_names): + return self._data_item_names, True + + if ( + len(data[0]) == len(self._data_item_names) - 1 + and self._data_item_names[-1] == "boundname" + ): + return self._data_item_names[:-1], True + + return None, None
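+
+    # The untuple helpers above and below normalize user input: a DataFrame
+    # column holding cellid tuples such as (0, 1, 2) is expanded back into
+    # separate layer/row/column (or layer/cell, or node) integer columns, so
+    # the stored DataFrame always keeps one column per index field.
+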
+ + def _untuple_recarray(self, rec): + rec_list = rec.tolist() + for row, line in enumerate(rec_list): + for column, data in enumerate(line): + if isinstance(data, tuple) and len(data) == 1: + line_lst = list(line) + line_lst[column] = data[0] + rec_list[row] = tuple(line_lst) + return rec_list + + def set_data(self, data, autofill=False, check_data=True, append=False): + """Sets the contents of the data to "data". Data can have the + following formats: + 1) recarray - recarray containing the datalist + 2) [(line_one), (line_two), ...] - list where each line of the + datalist is a tuple within the list + If the data is transient, a dictionary can be used to specify each + stress period, where the dictionary key is the zero-based stress + period number and the dictionary value is the datalist data defined + above: + {0: ndarray, 1: [(line_one), (line_two), ...], 2: {'filename': filename}} + + Parameters + ---------- + data : ndarray/list/dict + Data to set + autofill : bool + Automatically correct data + check_data : bool + Whether to verify the data + append : bool + Append to existing data + + """ + # (re)build data header + self._build_data_header() + if isinstance(data, dict) and not self.has_data(): + MFPandasList.set_record(self, data) + return + if isinstance(data, np.recarray): + # verify data shape of data (recarray) + if len(data) == 0: + # create empty dataset + data = pandas.DataFrame(columns=self._header_names) + elif len(data[0]) != len(self._header_names): + if len(data[0]) == len(self._data_item_names): + # data most likely being stored with cellids as tuples, + # create a dataframe and untuple the cellids + data = pandas.DataFrame( + data, columns=self._data_item_names + ) + data = self._untuple_cellids(data)[0] + # make sure columns are still in correct order + data = pandas.DataFrame(data, columns=self._header_names) + else: + raise MFDataException( + f"ERROR: Data list {self._data_name} supplied the " + f"wrong number of columns of data, expected " + f"{len(self._data_item_names)} got {len(data[0])}." + ) + else: + # data size matches the expected header names, create a pandas + # dataframe from the data + data_new = pandas.DataFrame(data, columns=self._header_names) + if not self._dataframe_check(data_new): + data_list = self._untuple_recarray(data) + data = pandas.DataFrame( + data_list, columns=self._header_names + ) + else: + data, count = self._untuple_cellids(data_new) + if count > 0: + # make sure columns are still in correct order + data = pandas.DataFrame( + data, columns=self._header_names + ) + elif isinstance(data, list) or isinstance(data, tuple): + if not (isinstance(data[0], list) or isinstance(data[0], tuple)): + # get data in the format of a tuple of lists (or tuples) + data = [data] + # resolve the data's column headings + columns = self._resolve_columns(data)[0] + if columns is None: + message = ( + f"ERROR: Data list {self._data_name} supplied the " + f"wrong number of columns of data, expected " + f"{len(self._data_item_names)} got {len(data[0])}."
+ ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self._data_dimensions.structure.get_model(), + self._data_dimensions.structure.get_package(), + self._data_dimensions.structure.path, + "setting list data", + self._data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ) + + if len(data[0]) == 0: + # create empty dataset + data = pandas.DataFrame(columns=columns) + else: + # create dataset + data = pandas.DataFrame(data, columns=columns) + if ( + self._data_item_names[-1] == "boundname" + and "boundname" not in columns + ): + # add empty boundname column + data["boundname"] = "" + # get rid of tuples from cellids + data, count = self._untuple_cellids(data) + if count > 0: + # make sure columns are still in correct order + data = pandas.DataFrame(data, columns=self._header_names) + elif isinstance(data, pandas.DataFrame): + if len(data.columns) != len(self._header_names): + message = ( + f"ERROR: Data list {self._data_name} supplied the " + f"wrong number of columns of data, expected " + f"{len(self._header_names)} got {len(data.columns)}.\n" + f"Data columns supplied: {data.columns}\n" + f"Data columns expected: {self._header_names}" + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self._data_dimensions.structure.get_model(), + self._data_dimensions.structure.get_package(), + self._data_dimensions.structure.path, + "setting list data", + self._data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ) + # set correct data header names + data = data.set_axis(self._header_names, axis=1) + else: + message = ( + f"ERROR: Data list {self._data_name} is an unsupported type: " + f"{type(data)}." + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self._data_dimensions.structure.get_model(), + self._data_dimensions.structure.get_package(), + self._data_dimensions.structure.path, + "setting list data", + self._data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ) + + data_storage = self._get_storage() + if append: + # append data to existing dataframe + current_data = self._get_dataframe() + if current_data is not None: + data = pandas.concat([current_data, data]) + if data_storage.data_storage_type == DataStorageType.external_file: + # store external data until next write + data_storage.internal_data = data + else: + # store data internally + data_storage.set_internal(data) + data_storage.modified = True + + def has_modified_ext_data(self): + """check to see if external data has been modified since last read""" + data_storage = self._get_storage() + return ( + data_storage.data_storage_type == DataStorageType.external_file + and data_storage.internal_data is not None + ) + + def binary_ext_data(self): + """check for binary data""" + data_storage = self._get_storage() + return data_storage.binary + + def to_array(self, kper=0, mask=False): + """Convert stress period boundary condition (MFDataList) data for a + specified stress period to a 3-D numpy array. + + Parameters + ---------- + kper : int + MODFLOW zero-based stress period number to return (default is + zero) + mask : bool + return array with np.NaN instead of zero + + Returns + ---------- + out : dict of numpy.ndarrays + Dictionary of 3-D numpy arrays containing the stress period data + for a selected stress period.
The dictionary keys are the + MFDataList dtype names for the stress period data.""" + sarr = self.get_data(key=kper) + model_grid = self._data_dimensions.get_model_grid() + return list_to_array(sarr, model_grid, kper, mask) + + def set_record(self, record, autofill=False, check_data=True): + """Sets the contents of the data and metadata to "record". + "record" is a dictionary which has the following format: + {'filename': filename, 'binary': True/False, 'data': data} + To store to file include 'filename' in the dictionary. + + Parameters + ---------- + record : ndarray/list/dict + Data and metadata to set + autofill : bool + Automatically correct data + check_data : bool + Whether to verify the data + + """ + if isinstance(record, dict): + data_storage = self._get_storage() + if "filename" in record: + data_storage.set_external(record["filename"]) + if "binary" in record: + if ( + record["binary"] + and self._data_dimensions.package_dim.boundnames() + ): + message = ( + "Unable to store list data ({}) to a binary " + "file when using boundnames" + ".".format(self._data_dimensions.structure.name) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self._data_dimensions.structure.get_model(), + self._data_dimensions.structure.get_package(), + self._data_dimensions.structure.path, + "writing list data to binary file", + self._data_dimensions.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ) + data_storage.binary = record["binary"] + if "data" in record: + # data gets written out to file + MFPandasList.set_data(self, record["data"]) + # get file path + fd_file_path = self._get_file_path() + # make sure folder exists + folder_path = os.path.split(fd_file_path)[0] + if not os.path.exists(folder_path): + os.makedirs(folder_path) + # store data + self._write_file_entry(fd_file_path) + else: + if "data" in record: + data_storage.modified = True + data_storage.set_internal(None) + MFPandasList.set_data(self, record["data"]) + if "iprn" in record: + data_storage.iprn = record["iprn"] + + def append_data(self, data): + """Appends "data" to the end of this list. Assumes data is in a format + that can be appended directly to a pandas dataframe. + + Parameters + ---------- + data : list(tuple) + Data to append. + + """ + try: + self._resync() + if self._get_storage() is None: + self._data_storage = self._new_storage() + data_storage = self._get_storage() + if ( + data_storage.data_storage_type + == DataStorageType.internal_array + ): + # update internal data + MFPandasList.set_data(self, data, append=True) + elif ( + data_storage.data_storage_type == DataStorageType.external_file + ): + # get external data from file + external_data = self._get_dataframe() + if isinstance(data, list): + # build dataframe + data = pandas.DataFrame( + data, columns=external_data.columns + ) + # concatenate + data = pandas.concat([external_data, data]) + # store + ext_record = self._get_record() + ext_record["data"] = data + MFPandasList.set_record(self, ext_record) + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "appending data", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + None, + self._simulation_data.debug, + ex, + )
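+
+    # Hedged usage sketch, mirroring this patch's autotests ("wel" and the
+    # stress period key are assumed names): flip a period's list data to
+    # binary external storage through the record dictionary that set_record
+    # above accepts:
+    #
+    #     rec = wel.stress_period_data.get_record()   # {0: {"data": ...}}
+    #     rec[0]["binary"] = True
+    #     wel.stress_period_data.set_record(rec)
+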
+ + def append_list_as_record(self, record): + """Appends the list `record` as a single record in this list's + dataframe. Assumes "record" has the correct dimensions. + + Parameters + ---------- + record : list + List to be appended as a single record to the data's existing + dataframe. + + """ + self._resync() + try: + # store + self.append_data([record]) + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "appending data", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + None, + self._simulation_data.debug, + ex, + ) + + def update_record(self, record, key_index): + """Updates a record at index "key_index" with the contents of "record". + If the index does not exist update_record appends the contents of + "record" to this list's dataframe. + + Parameters + ---------- + record : list + New record to update data with + key_index : int + Stress period key of record to update. Only used in transient + data types. + """ + self.append_list_as_record(record) + + def store_internal( + self, + check_data=True, + ): + """Store all data internally. + + Parameters + ---------- + check_data : bool + Verify data prior to storing + + """ + storage = self._get_storage() + # check if data is already stored external + if ( + storage is None + or storage.data_storage_type == DataStorageType.external_file + ): + data = self._get_dataframe() + # if not empty dataset + if data is not None: + if ( + self._simulation_data.verbosity_level.value + >= VerbosityLevel.verbose.value + ): + print(f"Storing {self.structure.name} internally...") + internal_data = { + "data": data, + } + MFPandasList.set_record( + self, internal_data, check_data=check_data + ) + + def store_as_external_file( + self, + external_file_path, + binary=False, + replace_existing_external=True, + check_data=True, + ): + """Store all data externally in file external_file_path. The binary + parameter allows storage in a binary file. If replace_existing_external + is set to False, this method will not do anything if the data is + already in an external file. + + Parameters + ---------- + external_file_path : str + Path to external file + binary : bool + Store data in a binary file + replace_existing_external : bool + Whether to replace an existing external file. + check_data : bool + Verify data prior to storing + + """ + # only store data externally (do not store subpackage info) + if self.structure.construct_package is None: + storage = self._get_storage() + # check if data is already stored external + if ( + replace_existing_external + or storage is None + or storage.data_storage_type == DataStorageType.internal_array + or storage.data_storage_type + == DataStorageType.internal_constant + ): + data = self._get_dataframe() + # if not empty dataset + if data is not None: + if ( + self._simulation_data.verbosity_level.value + >= VerbosityLevel.verbose.value + ): + print( + "Storing {} to external file {}.."
+ ".".format(self.structure.name, external_file_path) + ) + external_data = { + "filename": external_file_path, + "data": data, + "binary": binary, + } + MFPandasList.set_record( + self, external_data, check_data=check_data + ) + + def external_file_name(self): + """Returns external file name, or None if this is not external data.""" + storage = self._get_storage() + if storage is None: + return None + if ( + storage.data_storage_type == DataStorageType.external_file + and storage.fname is not None + and storage.fname != "" + ): + return storage.fname + return None + + @staticmethod + def _file_data_to_memory(fd_data_file, first_line): + """ + scan data file from starting point to find the extent of the data + + Parameters + ---------- + fd_data_file : file descriptor + File with data to scan. File location should be at the beginning + of the data. + + Returns + ------- + list, str : data from file, next line in file after data + """ + data_lines = [] + clean_first_line = first_line.strip().lower() + if clean_first_line.startswith("end"): + return data_lines, fd_data_file.readline() + if len(clean_first_line) > 0 and clean_first_line[0] != "#": + data_lines.append(clean_first_line) + line = fd_data_file.readline() + while line: + line_mod = line.strip().lower() + if line_mod.startswith("end"): + return data_lines, line + if len(line_mod) > 0 and line_mod[0] != "#": + data_lines.append(line_mod) + line = fd_data_file.readline() + return data_lines, "" + + def _dataframe_check(self, data_frame): + valid = data_frame.shape[0] > 0 + if valid: + for name in self._header_names: + if ( + name != "boundname" + and data_frame[name].isnull().values.any() + ): + valid = False + break + return valid + + def _try_pandas_read(self, fd_data_file): + delimiter_list = ["\\s+", ","] + for delimiter in delimiter_list: + try: + # read flopy formatted data, entire file + data_frame = pandas.read_csv( + fd_data_file, + sep=delimiter, + names=self._header_names, + dtype=self._data_header, + comment="#", + index_col=False, + skipinitialspace=True, + ) + except BaseException: + fd_data_file.seek(0) + continue + + # basic check for valid dataset + if self._dataframe_check(data_frame): + return data_frame + else: + fd_data_file.seek(0) + return None + + def _read_text_data(self, fd_data_file, first_line, external_file=False): + """ + read list data from data file + + Parameters + ---------- + fd_data_file : file descriptor + File with data. File location should be at the beginning of the + data. 
+ + external_file : bool + whether this is an external file + + Returns + ------- + DataFrame : file's list data + list : containing boolean for success of operation and the next line of + data in the file + """ + # initialize + data_frame = None + return_val = [False, None] + + # build header + self._build_data_header() + file_data, next_line = self._file_data_to_memory( + fd_data_file, first_line + ) + io_file_data = io.StringIO("\n".join(file_data)) + if external_file: + data_frame = self._try_pandas_read(io_file_data) + if data_frame is not None: + self._decrement_id_fields(data_frame) + else: + # get number of rows of data + if len(file_data) > 0: + data_frame = self._try_pandas_read(io_file_data) + if data_frame is not None: + self._decrement_id_fields(data_frame) + return_val = [True, fd_data_file.readline()] + + if data_frame is None: + # read user formatted data using MFList class + list_data = MFList( + self._simulation_data, + self._model_or_sim, + self.structure, + None, + True, + self.path, + self._data_dimensions.package_dim, + self._package, + self._block, + ) + # start in original location + io_file_data.seek(0) + return_val = list_data.load( + None, io_file_data, self._block.block_headers[-1] + ) + rec_array = list_data.get_data() + if rec_array is not None: + data_frame = pandas.DataFrame(rec_array) + data_frame = self._untuple_cellids(data_frame)[0] + return_val = [True, fd_data_file.readline()] + else: + data_frame = None + return data_frame, return_val + + def _save_binary_data(self, fd_data_file, data): + # write + file_access = MFFileAccessList( + self.structure, + self._data_dimensions, + self._simulation_data, + self._path, + self._current_key, + ) + file_access.write_binary_file( + self._dataframe_to_recarray(data), + fd_data_file, + self._model_or_sim.modeldiscrit, + ) + data_storage = self._get_storage() + data_storage.internal_data = None + + def has_data(self, key=None): + """Returns whether this MFList has any data associated with it.""" + try: + if self._get_storage() is None: + return False + return self._get_storage().has_data() + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "checking for data", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + None, + self._simulation_data.debug, + ex, + ) + + def _load_external_data(self, data_storage): + """loads external data into a panda's dataframe""" + file_path = self._resolve_ext_file_path(data_storage) + # parse next line in file as data header + if data_storage.binary: + file_access = MFFileAccessList( + self.structure, + self._data_dimensions, + self._simulation_data, + self._path, + self._current_key, + ) + np_data = file_access.read_binary_data_from_file( + file_path, + self._model_or_sim.modeldiscrit, + build_cellid=False, + ) + pd_data = pandas.DataFrame(np_data) + if "col" in pd_data: + # keep layer/row/column names consistent + pd_data = pd_data.rename(columns={"col": "column"}) + self._decrement_id_fields(pd_data) + else: + with open(file_path, "r") as fd_data_file: + pd_data, return_val = self._read_text_data( + fd_data_file, "", True + ) + return pd_data + + def load( + self, + first_line, + file_handle, + block_header, + pre_data_comments=None, + external_file_info=None, + ): + """Loads data from first_line (the first line of data) and open file + file_handle which is pointing to the second line of data. 
Returns a + tuple with the first item indicating whether all data was read + and the second item being the last line of text read from the file. + This method was only designed for internal FloPy use and is not + recommended for end users. + + Parameters + ---------- + first_line : str + A string containing the first line of data in this list. + file_handle : file descriptor + A file handle for the data file which points to the second + line of data for this list + block_header : MFBlockHeader + Block header object that contains block header information + for the block containing this data + pre_data_comments : MFComment + Comments immediately prior to the data + external_file_info : list + Contains information about storing files externally + Returns + ------- + more data : bool, + next data line : str + + """ + data_storage = self._get_storage() + data_storage.modified = False + # parse first line to determine if this is internal or external data + datautil.PyListUtil.reset_delimiter_used() + arr_line = datautil.PyListUtil.split_data_line(first_line) + if arr_line and ( + len(arr_line[0]) >= 2 and arr_line[0][:3].upper() == "END" + ): + return [False, arr_line] + if len(arr_line) >= 2 and arr_line[0].upper() == "OPEN/CLOSE": + try: + ( + data, + multiplier, + iprn, + binary, + data_file, + ) = self._process_open_close_line(arr_line) + except Exception as ex: + message = ( + "An error occurred while processing the following " + "open/close line: {}".format(arr_line) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "processing open/close line", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + self._simulation_data.debug, + ex, + ) + data_storage.set_external(data_file, data) + data_storage.binary = binary + data_storage.iprn = iprn + return_val = [False, None] + # else internal + else: + # read data into pandas dataframe + pd_data, return_val = self._read_text_data( + file_handle, first_line, False + ) + # verify this is the end of the block? 
+ + # store internal data + data_storage.set_internal(pd_data) + return return_val + + def _new_storage(self): + return {"Data": PandasListStorage()} + + def _get_storage(self): + return self._data_storage["Data"] + + def _get_id_fields(self, data_frame): + """ + assemble a list of id fields in this dataset + + Parameters + ---------- + data_frame : DataFrame + data for this list + + Returns + ------- + list of column names that are id fields + """ + id_fields = [] + # loop through the data structure + for idx, data_item_struct in enumerate( + self.structure.data_item_structures + ): + if data_item_struct.type == DatumType.keystring: + # handle id fields for keystring + # ***Code not necessary for this version + ks_key = data_frame.iloc[0, idx].lower() + if ks_key in data_item_struct.keystring_dict: + data_item_ks = data_item_struct.keystring_dict[ks_key] + else: + ks_key = f"{ks_key}record" + if ks_key in data_item_struct.keystring_dict: + data_item_ks = data_item_struct.keystring_dict[ks_key] + else: + continue + if isinstance(data_item_ks, MFDataStructure): + dis = data_item_ks.data_item_structures + for data_item in dis: + self._update_id_fields( + id_fields, data_item, data_frame + ) + else: + self._update_id_fields(id_fields, data_item_ks, data_frame) + else: + self._update_id_fields(id_fields, data_item_struct, data_frame) + return id_fields + + def _update_id_fields(self, id_fields, data_item_struct, data_frame): + """ + update the "id_fields" list with new field(s) based on an item + in the expected data structure and the data provided. + """ + if data_item_struct.numeric_index or data_item_struct.is_cellid: + if data_item_struct.name.lower() == "cellid": + if isinstance(self._mg, StructuredGrid): + id_fields.append("layer") + id_fields.append("row") + id_fields.append("column") + elif isinstance(self._mg, VertexGrid): + id_fields.append("layer") + id_fields.append("cell") + elif isinstance(self._mg, UnstructuredGrid): + id_fields.append("node") + else: + raise MFDataException( + "ERROR: Unrecognized model grid " + f"{str(self._mg)} not supported by MFPandasList" + ) + else: + for col in data_frame.columns: + if col.startswith(data_item_struct.name): + data_item_len = len(data_item_struct.name) + if len(col) > data_item_len: + col_end = col[data_item_len:] + if ( + len(col_end) > 1 + and col_end[0] == "_" + and datautil.DatumUtil.is_int(col_end[1:]) + ): + id_fields.append(col) + else: + id_fields.append(data_item_struct.name) + + def _increment_id_fields(self, data_frame): + """increment all id fields by 1 (reverse for negative values)""" + dtypes = data_frame.dtypes + for id_field in self._get_id_fields(data_frame): + if id_field in data_frame: + if id_field in dtypes and dtypes[id_field].str != "<i8": + # make sure the id field is a 64-bit integer type + # before adjusting it + data_frame[id_field] = data_frame[id_field].astype( + np.int64 + ) + data_frame[id_field] = np.where( + data_frame[id_field] < 0, + data_frame[id_field] - 1, + data_frame[id_field] + 1, + ) + + def _decrement_id_fields(self, data_frame): + """decrement all id fields by 1 (reverse for negative values)""" + for id_field in self._get_id_fields(data_frame): + if id_field in data_frame: + data_frame[id_field] = np.where( + data_frame[id_field] > 0, + data_frame[id_field] - 1, + data_frame[id_field] + 1, + ) + + def _resolve_ext_file_path(self, data_storage): + """resolve the full path to this list's external data file""" + data_dim = self._data_dimensions + model_name = data_dim.package_dim.model_dim[0].model_name + fp_relative = data_storage.fname + if self._model_or_sim.type == "Model": + rel_path = self._simulation_data.mfpath.model_relative_path[ + model_name + ] + if rel_path is not None and len(rel_path) > 0 and rel_path != ".": + # include model relative path in external file path + # only if model relative path is not already in external + # file path i.e. when reading!
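+                # e.g. fname "wel_0.txt" with model relative path "gwf_model"
+                # (assumed names) resolves below to "gwf_model/wel_0.txt"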
+ fp_rp_l = fp_relative.split(os.path.sep) + rp_l_r = rel_path.split(os.path.sep)[::-1] + for i, rp in enumerate(rp_l_r): + if rp != fp_rp_l[len(rp_l_r) - i - 1]: + fp_relative = os.path.join(rp, fp_relative) + fp = self._simulation_data.mfpath.resolve_path( + fp_relative, model_name + ) + else: + if fp_relative is not None: + fp = os.path.join( + self._simulation_data.mfpath.get_sim_path(), fp_relative + ) + else: + fp = self._simulation_data.mfpath.get_sim_path() + return fp + + def _dataframe_to_recarray(self, data_frame): + # convert cellids to tuple + df_rec = self._add_cellid_fields(data_frame, False) + + # convert to recarray + return df_rec.to_records(index=False) + + def _get_data(self): + dataframe = self._get_dataframe() + if dataframe is None: + return None + return self._dataframe_to_recarray(dataframe) + + def _get_dataframe(self): + """get and return dataframe for this list data""" + data_storage = self._get_storage() + if data_storage is None or data_storage.data_storage_type is None: + block_exists = self._block.header_exists( + self._current_key, self.path + ) + if block_exists: + self._build_data_header() + return pandas.DataFrame(columns=self._header_names) + else: + return None + if data_storage.data_storage_type == DataStorageType.internal_array: + data = copy.deepcopy(data_storage.internal_data) + else: + if data_storage.internal_data is not None: + # latest data is in internal cache + data = copy.deepcopy(data_storage.internal_data) + else: + # load data from file and return + data = self._load_external_data(data_storage) + return data + + def get_dataframe(self): + """Returns the list's data as a dataframe. + + Returns + ------- + data : DataFrame + + """ + return self._get_dataframe() + + def get_data(self, apply_mult=False, **kwargs): + """Returns the list's data as a recarray. + + Parameters + ---------- + apply_mult : bool + Whether to apply a multiplier. + + Returns + ------- + data : recarray + + """ + return self._get_data() + + def get_record(self, data_frame=False): + """Returns the list's data and metadata in a dictionary. Data is in + key "data" and metadata in keys "filename" and "binary". + + Returns + ------- + data_record : dict + + """ + return self._get_record(data_frame) + + def _get_record(self, data_frame=False): + """Returns the list's data and metadata in a dictionary. Data is in + key "data" and metadata in keys "filename" and "binary". + + Returns + ------- + data_record : dict + + """ + try: + if self._get_storage() is None: + return None + record = self._get_storage().get_record() + except Exception as ex: + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + self.structure.get_model(), + self.structure.get_package(), + self._path, + "getting record", + self.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + None, + self._simulation_data.debug, + ex, + ) + if not data_frame: + if "data" not in record: + record["data"] = self._get_data() + elif record["data"] is not None: + data = copy.deepcopy(record["data"]) + record["data"] = self._dataframe_to_recarray(data) + else: + if "data" not in record: + record["data"] = self._get_dataframe() + return record + + def write_file_entry( + self, + fd_data_file, + ext_file_action=ExtFileAction.copy_relative_paths, + fd_main=None, + ): + """ + Writes file entry to file, or if fd_data_file is None returns file + entry as string. 
+ + Parameters + ---------- + fd_data_file : file descriptor + where data is written + ext_file_action : ExtFileAction + What action to perform on external files + fd_main + file descriptor where open/close string should be written (for + external file data) + + Returns + ------- + file entry : str + + """ + return self._write_file_entry(fd_data_file, ext_file_action, fd_main) + + def get_file_entry( + self, + ext_file_action=ExtFileAction.copy_relative_paths, + ): + """Returns a string containing the data formatted for a MODFLOW 6 + file. + + Parameters + ---------- + ext_file_action : ExtFileAction + How to handle external paths. + + Returns + ------- + file entry : str + + """ + return self._write_file_entry(None) + + def _write_file_entry( + self, + fd_data_file, + ext_file_action=ExtFileAction.copy_relative_paths, + fd_main=None, + ): + """ + Writes file entry to file, or if fd_data_file is None returns file + entry as string. + + Parameters + ---------- + fd_data_file : file descriptor + Where data is written + ext_file_action : ExtFileAction + What action to perform on external files + fd_main + file descriptor where open/close string should be written (for + external file data) + Returns + ------- + result of pandas to_csv call + """ + data_storage = self._get_storage() + if data_storage is None: + return "" + if ( + data_storage.data_storage_type == DataStorageType.external_file + and fd_main is not None + ): + indent = self._simulation_data.indent_string + ext_string, fname = self._get_external_formatting_str( + data_storage.fname, + None, + data_storage.binary, + data_storage.iprn, + DataStructureType.recarray, + ext_file_action, + ) + data_storage.fname = fname + fd_main.write(f"{indent}{indent}{ext_string}") + if data_storage is None or data_storage.internal_data is None: + return "" + # Loop through data pieces + data = self._remove_cellid_fields(data_storage.internal_data) + if ( + data_storage.data_storage_type == DataStorageType.internal_array + or not data_storage.binary + or fd_data_file is None + ): + # add spacer column + if "leading_space" not in data: + data.insert(loc=0, column="leading_space", value="") + if "leading_space_2" not in data: + data.insert(loc=0, column="leading_space_2", value="") + + result = "" + # if data is internal or has been modified + if ( + data_storage.data_storage_type == DataStorageType.internal_array + or data is not None + or fd_data_file is None + ): + if ( + data_storage.data_storage_type == DataStorageType.external_file + and data_storage.binary + and fd_data_file is not None + ): + # write old way using numpy + self._save_binary_data(fd_data_file, data) + else: + if data.shape[0] == 0: + if fd_data_file is None or not isinstance( + fd_data_file, io.TextIOBase + ): + result = "\n" + else: + # no data, just write empty line + fd_data_file.write("\n") + else: + # convert data to 1-based + self._increment_id_fields(data) + # write converted data + float_format = ( + f"%{self._simulation_data.reg_format_str[2:-1]}" + ) + result = data.to_csv( + fd_data_file, + sep=" ", + header=False, + index=False, + float_format=float_format, + lineterminator="\n", + ) + # clean up + data_storage.modified = False + self._decrement_id_fields(data) + if ( + data_storage.data_storage_type + == DataStorageType.external_file + ): + data_storage.internal_data = None + + if data_storage.internal_data is not None: + # clean up + if "leading_space" in data_storage.internal_data: + data_storage.internal_data = data_storage.internal_data.drop( + 
columns="leading_space" + ) + if "leading_space_2" in data_storage.internal_data: + data_storage.internal_data = data_storage.internal_data.drop( + columns="leading_space_2" + ) + return result + + def _get_file_path(self): + """ + gets the file path to the data + + Returns + ------- + file_path : file path to data + + """ + data_storage = self._get_storage() + if data_storage.fname is None: + return None + if self._model_or_sim.type == "model": + rel_path = self._simulation_data.mfpath.model_relative_path[ + self._model_or_sim.name + ] + fp_relative = data_storage.fname + if rel_path is not None and len(rel_path) > 0 and rel_path != ".": + # include model relative path in external file path + # only if model relative path is not already in external + # file path i.e. when reading! + fp_rp_l = fp_relative.split(os.path.sep) + rp_l_r = rel_path.split(os.path.sep)[::-1] + for i, rp in enumerate(rp_l_r): + if rp != fp_rp_l[len(rp_l_r) - i - 1]: + fp_relative = os.path.join(rp, fp_relative) + return self._simulation_data.mfpath.resolve_path( + fp_relative, self._model_or_sim.name + ) + else: + return os.path.join( + self._simulation_data.mfpath.get_sim_path(), data_storage.fname + ) + + def plot( + self, + key=None, + names=None, + filename_base=None, + file_extension=None, + mflay=None, + **kwargs, + ): + """ + Plot boundary condition (MfList) data + + Parameters + ---------- + key : str + MfList dictionary key. (default is None) + names : list + List of names for figure titles. (default is None) + filename_base : str + Base file name that will be used to automatically generate file + names for output image files. Plots will be exported as image + files if file_name_base is not None. (default is None) + file_extension : str + Valid matplotlib.pyplot file extension for savefig(). Only used + if filename_base is not None. (default is 'png') + mflay : int + MODFLOW zero-based layer number to return. If None, then all + all layers will be included. (default is None) + **kwargs : dict + axes : list of matplotlib.pyplot.axis + List of matplotlib.pyplot.axis that will be used to plot + data for each layer. If axes=None axes will be generated. + (default is None) + pcolor : bool + Boolean used to determine if matplotlib.pyplot.pcolormesh + plot will be plotted. (default is True) + colorbar : bool + Boolean used to determine if a color bar will be added to + the matplotlib.pyplot.pcolormesh. Only used if pcolor=True. + (default is False) + inactive : bool + Boolean used to determine if a black overlay in inactive + cells in a layer will be displayed. (default is True) + contour : bool + Boolean used to determine if matplotlib.pyplot.contour + plot will be plotted. (default is False) + clabel : bool + Boolean used to determine if matplotlib.pyplot.clabel + will be plotted. Only used if contour=True. (default is False) + grid : bool + Boolean used to determine if the model grid will be plotted + on the figure. (default is False) + masked_values : list + List of unique values to be excluded from the plot. + + Returns + ---------- + out : list + Empty list is returned if filename_base is not None. Otherwise + a list of matplotlib.pyplot.axis is returned. 
+ """ + from ...plot import PlotUtilities + + if not self.plottable: + raise TypeError("Simulation level packages are not plottable") + + if "cellid" not in self.dtype.names: + return + + PlotUtilities._plot_mflist_helper( + mflist=self, + key=key, + kper=None, + names=names, + filename_base=filename_base, + file_extension=file_extension, + mflay=mflay, + **kwargs, + ) + + +class MFPandasTransientList( + MFPandasList, mfdata.MFTransient, DataListInterface +): + """ + Provides an interface for the user to access and update MODFLOW transient + pandas list data. + + Parameters + ---------- + sim_data : MFSimulationData + data contained in the simulation + structure : MFDataStructure + describes the structure of the data + enable : bool + enable/disable the array + path : tuple + path in the data dictionary to this MFArray + dimensions : MFDataDimensions + dimension information related to the model, package, and array + + """ + + def __init__( + self, + sim_data, + model_or_sim, + structure, + enable=True, + path=None, + dimensions=None, + package=None, + block=None, + ): + super().__init__( + sim_data=sim_data, + model_or_sim=model_or_sim, + structure=structure, + data=None, + enable=enable, + path=path, + dimensions=dimensions, + package=package, + block=block, + ) + self.repeating = True + self.empty_keys = {} + + @property + def data_type(self): + return DataType.transientlist + + @property + def dtype(self): + data = self.get_data() + if len(data) > 0: + if 0 in data: + return data[0].dtype + else: + return next(iter(data.values())).dtype + else: + return None + + @property + def plottable(self): + """If this list data is plottable""" + if self.model is None: + return False + else: + return True + + @property + def data(self): + """Returns list data. Calls get_data with default parameters.""" + return self.get_data() + + @property + def dataframe(self): + """Returns list data. Calls get_dataframe with default parameters.""" + return self.get_dataframe() + + def to_array(self, kper=0, mask=False): + """Returns list data as an array.""" + return super().to_array(kper, mask) + + def remove_transient_key(self, transient_key): + """Remove transient stress period key. Method is used + internally by FloPy and is not intended for the end user. + + """ + if transient_key in self._data_storage: + del self._data_storage[transient_key] + + def add_transient_key(self, transient_key): + """Adds a new transient time allowing data for that time to be stored + and retrieved using the key `transient_key`. Method is used + internally by FloPy and is not intended for the end user. + + Parameters + ---------- + transient_key : int + Zero-based stress period to add + + """ + super().add_transient_key(transient_key) + self._data_storage[transient_key] = PandasListStorage() + + def store_as_external_file( + self, + external_file_path, + binary=False, + replace_existing_external=True, + check_data=True, + ): + """Store all data externally in file external_file_path. The binary + parameter allows storage in a binary file. If replace_existing_external + is set to False, this method will not do anything if the data is + already in an external file. + + Parameters + ---------- + external_file_path : str + Path to external file + binary : bool + Store data in a binary file + replace_existing_external : bool + Whether to replace an existing external file.
+        check_data : bool
+            Verify data prior to storing
+
+        """
+        self._cache_model_grid = True
+        for sp in self._data_storage.keys():
+            self._current_key = sp
+            storage = self._get_storage()
+            if storage.internal_size == 0:
+                storage.internal_data = self.get_dataframe(sp)
+            if storage.internal_size > 0 and (
+                self._get_storage().data_storage_type
+                != DataStorageType.external_file
+                or replace_existing_external
+            ):
+                fname, ext = os.path.splitext(external_file_path)
+                if datautil.DatumUtil.is_int(sp):
+                    full_name = f"{fname}_{int(sp) + 1}{ext}"
+                else:
+                    full_name = f"{fname}_{sp}{ext}"
+
+                super().store_as_external_file(
+                    full_name,
+                    binary,
+                    replace_existing_external,
+                    check_data,
+                )
+        self._cache_model_grid = False
+
+    def store_internal(
+        self,
+        check_data=True,
+    ):
+        """Store all data internally.
+
+        Parameters
+        ----------
+        check_data : bool
+            Verify data prior to storing
+
+        """
+        self._cache_model_grid = True
+        for sp in self._data_storage.keys():
+            self._current_key = sp
+            if (
+                self._get_storage().data_storage_type
+                == DataStorageType.external_file
+            ):
+                super().store_internal(
+                    check_data,
+                )
+        self._cache_model_grid = False
+
+    def has_data(self, key=None):
+        """Returns whether this MFList has any data associated with it in
+        stress period `key`."""
+        if key is None:
+            for sto_key in self._data_storage.keys():
+                self.get_data_prep(sto_key)
+                if super().has_data():
+                    return True
+            return False
+        else:
+            self.get_data_prep(key)
+            return super().has_data()
+
+    def has_modified_ext_data(self, key=None):
+        if key is None:
+            for sto_key in self._data_storage.keys():
+                self.get_data_prep(sto_key)
+                if super().has_modified_ext_data():
+                    return True
+            return False
+        else:
+            self.get_data_prep(key)
+            return super().has_modified_ext_data()
+
+    def binary_ext_data(self, key=None):
+        if key is None:
+            for sto_key in self._data_storage.keys():
+                self.get_data_prep(sto_key)
+                if super().binary_ext_data():
+                    return True
+            return False
+        else:
+            self.get_data_prep(key)
+            return super().binary_ext_data()
+
+    def get_record(self, key=None, data_frame=False):
+        """Returns the data for stress period `key`. If no key is specified,
+        returns all records in a dictionary with zero-based stress period
+        numbers as keys. See MFList's get_record documentation for more
+        information on the format of each record returned.
+
+        Parameters
+        ----------
+        key : int
+            Zero-based stress period to return data from.
+        data_frame : bool
+            Whether to return a pandas DataFrame object instead of a
+            recarray.
+
+        Returns
+        -------
+        data_record : dict
+
+        """
+        if self._data_storage is not None and len(self._data_storage) > 0:
+            if key is None:
+                output = {}
+                for key in self._data_storage.keys():
+                    self.get_data_prep(key)
+                    output[key] = super().get_record(data_frame=data_frame)
+                return output
+            self.get_data_prep(key)
+            return super().get_record(data_frame=data_frame)
+        else:
+            return None
+
+    def get_dataframe(self, key=None, apply_mult=False):
+        return self.get_data(key, apply_mult, dataframe=True)
+
+    def get_data(self, key=None, apply_mult=False, dataframe=False, **kwargs):
+        """Returns the data for stress period `key`.
+
+        Parameters
+        ----------
+        key : int
+            Zero-based stress period to return data from.
+        apply_mult : bool
+            Apply multiplier
+        dataframe : bool
+            Get as pandas dataframe
+
+        Returns
+        -------
+        data : recarray
+
+        """
+        if self._data_storage is not None and len(self._data_storage) > 0:
+            if key is None:
+                if "array" in kwargs:
+                    output = []
+                    sim_time = self._data_dimensions.package_dim.model_dim[
+                        0
+                    ].simulation_time
+                    num_sp = sim_time.get_num_stress_periods()
+                    data = None
+                    for sp in range(0, num_sp):
+                        if sp in self._data_storage:
+                            self.get_data_prep(sp)
+                            data = super().get_data(apply_mult=apply_mult)
+                        elif self._block.header_exists(sp):
+                            data = None
+                        output.append(data)
+                    return output
+                else:
+                    output = {}
+                    for key in self._data_storage.keys():
+                        self.get_data_prep(key)
+                        if dataframe:
+                            output[key] = super().get_dataframe()
+                        else:
+                            output[key] = super().get_data(
+                                apply_mult=apply_mult
+                            )
+                    return output
+            self.get_data_prep(key)
+            if dataframe:
+                return super().get_dataframe()
+            else:
+                return super().get_data(apply_mult=apply_mult)
+        else:
+            return None
+
+    def set_record(self, record, autofill=False, check_data=True):
+        """Sets the contents of the data based on the contents of
+        `record`.
+
+        Parameters
+        ----------
+        record : dict
+            Record being set. Record must be a dictionary with
+            keys as zero-based stress periods and values as dictionaries
+            containing the data and metadata. See MFList's set_record
+            documentation for more information on the format of the values.
+        autofill : bool
+            Automatically correct data
+        check_data : bool
+            Whether to verify the data
+        """
+        self._set_data_record(
+            record,
+            autofill=autofill,
+            check_data=check_data,
+            is_record=True,
+        )
+
+    def set_data(self, data, key=None, autofill=False):
+        """Sets the contents of the data at time `key` to `data`.
+
+        Parameters
+        ----------
+        data : dict, recarray, list
+            Data being set. Data can be a dictionary with keys as
+            zero-based stress periods and values as the data. If data is
+            a recarray or list of tuples, it will be assigned to the
+            stress period specified in `key`. If any value is set to None,
+            that stress period of data will be removed.
+        key : int
+            Zero-based stress period to assign data to. Does not apply
+            if `data` is a dictionary.
+        autofill : bool
+            Automatically correct data.
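+
+        Examples
+        --------
+        A minimal illustrative sketch (hypothetical names; assumes an
+        existing well package ``wel``). Data for stress periods 0 and 1
+        is passed as a dictionary keyed by zero-based stress period:
+
+        >>> spd = {
+        ...     0: [((0, 0, 0), -100.0)],
+        ...     1: [((0, 0, 1), -50.0)],
+        ... }
+        >>> wel.stress_period_data.set_data(spd)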
+ """ + self._set_data_record(data, key, autofill) + + def masked_4D_arrays_itr(self): + """Returns list data as an iterator of a masked 4D array.""" + model_grid = self._data_dimensions.get_model_grid() + nper = self._data_dimensions.package_dim.model_dim[ + 0 + ].simulation_time.get_num_stress_periods() + # get the first kper + arrays = self.to_array(kper=0, mask=True) + + if arrays is not None: + # initialize these big arrays + for name, array in arrays.items(): + if model_grid.grid_type() == DiscretizationType.DIS: + m4d = np.zeros( + ( + nper, + model_grid.num_layers(), + model_grid.num_rows(), + model_grid.num_columns(), + ) + ) + m4d[0, :, :, :] = array + for kper in range(1, nper): + arrays = self.to_array(kper=kper, mask=True) + for tname, array in arrays.items(): + if tname == name: + m4d[kper, :, :, :] = array + yield name, m4d + else: + m3d = np.zeros( + ( + nper, + model_grid.num_layers(), + model_grid.num_cells_per_layer(), + ) + ) + m3d[0, :, :] = array + for kper in range(1, nper): + arrays = self.to_array(kper=kper, mask=True) + for tname, array in arrays.items(): + if tname == name: + m3d[kper, :, :] = array + yield name, m3d + + def _set_data_record( + self, + data_record, + key=None, + autofill=False, + check_data=False, + is_record=False, + ): + self._cache_model_grid = True + if isinstance(data_record, dict): + if "filename" not in data_record and "data" not in data_record: + # each item in the dictionary is a list for one stress period + # the dictionary key is the stress period the list is for + del_keys = [] + for key, list_item in data_record.items(): + list_item_record = False + if list_item is None: + self.remove_transient_key(key) + del_keys.append(key) + self.empty_keys[key] = False + elif isinstance(list_item, list) and len(list_item) == 0: + self.empty_keys[key] = True + else: + self.empty_keys[key] = False + if isinstance(list_item, dict): + list_item_record = True + self._set_data_prep(list_item, key) + if list_item_record: + super().set_record(list_item, autofill, check_data) + else: + super().set_data( + list_item, + autofill=autofill, + check_data=check_data, + ) + for key in del_keys: + del data_record[key] + else: + self.empty_keys[key] = False + self._set_data_prep(data_record["data"], key) + super().set_data(data_record, autofill) + else: + if is_record: + comment = ( + "Set record method requires that data_record is a " + "dictionary." 
+                )
+                type_, value_, traceback_ = sys.exc_info()
+                raise MFDataException(
+                    self.structure.get_model(),
+                    self.structure.get_package(),
+                    self._path,
+                    "setting data record",
+                    self.structure.name,
+                    inspect.stack()[0][3],
+                    type_,
+                    value_,
+                    traceback_,
+                    comment,
+                    self._simulation_data.debug,
+                )
+            if key is None:
+                # search for a key
+                new_key_index = self.structure.first_non_keyword_index()
+                if (
+                    new_key_index is not None
+                    and len(data_record) > new_key_index
+                ):
+                    key = data_record[new_key_index]
+                else:
+                    key = 0
+            if isinstance(data_record, list) and len(data_record) == 0:
+                self.empty_keys[key] = True
+            else:
+                check = True
+                if (
+                    isinstance(data_record, list)
+                    and len(data_record) > 0
+                    and data_record[0] == "no_check"
+                ):
+                    # not checking data
+                    check = False
+                    data_record = data_record[1:]
+                self.empty_keys[key] = False
+                if data_record is None:
+                    self.remove_transient_key(key)
+                else:
+                    self._set_data_prep(data_record, key)
+                    super().set_data(data_record, autofill, check_data=check)
+        self._cache_model_grid = False
+
+    def external_file_name(self, key=0):
+        """Returns the external file name, or None if this is not external
+        data.
+
+        Parameters
+        ----------
+        key : int
+            Zero-based stress period to return data from.
+        """
+        if key in self.empty_keys and self.empty_keys[key]:
+            return None
+        else:
+            self._get_file_entry_prep(key)
+            return super().external_file_name()
+
+    def write_file_entry(
+        self,
+        fd_data_file,
+        key=0,
+        ext_file_action=ExtFileAction.copy_relative_paths,
+        fd_main=None,
+    ):
+        """Writes the data at time `key`, formatted for a MODFLOW 6 file,
+        to `fd_data_file`.
+
+        Parameters
+        ----------
+        fd_data_file : file
+            File to write to
+        key : int
+            Zero-based stress period to return data from.
+        ext_file_action : ExtFileAction
+            How to handle external paths.
+
+        Returns
+        -------
+        file entry : str
+
+        """
+        if key in self.empty_keys and self.empty_keys[key]:
+            return ""
+        else:
+            self._get_file_entry_prep(key)
+            return super().write_file_entry(
+                fd_data_file,
+                ext_file_action=ext_file_action,
+                fd_main=fd_main,
+            )
+
+    def get_file_entry(
+        self, key=0, ext_file_action=ExtFileAction.copy_relative_paths
+    ):
+        """Returns a string containing the data at time `key` formatted for
+        a MODFLOW 6 file.
+
+        Parameters
+        ----------
+        key : int
+            Zero-based stress period to return data from.
+        ext_file_action : ExtFileAction
+            How to handle external paths.
+
+        Returns
+        -------
+        file entry : str
+
+        """
+        if key in self.empty_keys and self.empty_keys[key]:
+            return ""
+        else:
+            self._get_file_entry_prep(key)
+            return super()._write_file_entry(
+                None, ext_file_action=ext_file_action
+            )
+
+    def load(
+        self,
+        first_line,
+        file_handle,
+        block_header,
+        pre_data_comments=None,
+        external_file_info=None,
+    ):
+        """Loads data from first_line (the first line of data) and the open
+        file file_handle, which is pointing to the second line of data.
+        Returns a tuple with the first item indicating whether all data was
+        read and the second item being the last line of text read from the
+        file.
+
+        Parameters
+        ----------
+        first_line : str
+            A string containing the first line of data in this list.
+        file_handle : file descriptor
+            A file handle for the data file which points to the second
+            line of data for this list
+        block_header : MFBlockHeader
+            Block header object that contains block header information
+            for the block containing this data
+        pre_data_comments : MFComment
+            Comments immediately prior to the data
+        external_file_info : list
+            Contains information about storing files externally
+
+        """
+        self._load_prep(block_header)
+        return super().load(
+            first_line,
+            file_handle,
+            block_header,
+            pre_data_comments,
+            external_file_info,
+        )
+
+    def append_list_as_record(self, record, key=0):
+        """Appends the list `record` as a single record in this list's
+        recarray at time `key`. Assumes `record` has the correct dimensions.
+
+        Parameters
+        ----------
+        record : list
+            Data to append
+        key : int
+            Zero-based stress period to append data to.
+
+        """
+        self._append_list_as_record_prep(record, key)
+        super().append_list_as_record(record)
+
+    def update_record(self, record, key_index, key=0):
+        """Updates a record at index `key_index` and time `key` with the
+        contents of `record`. If the index does not exist, update_record
+        appends the contents of `record` to this list's recarray.
+
+        Parameters
+        ----------
+        record : list
+            New record contents
+        key_index : int
+            Index to update
+        key : int
+            Zero-based stress period to update
+
+        """
+
+        self._update_record_prep(key)
+        super().update_record(record, key_index)
+
+    def _new_storage(self):
+        return {}
+
+    def _get_storage(self):
+        if (
+            self._current_key is None
+            or self._current_key not in self._data_storage
+        ):
+            return None
+        return self._data_storage[self._current_key]
+
+    def plot(
+        self,
+        key=None,
+        names=None,
+        kper=0,
+        filename_base=None,
+        file_extension=None,
+        mflay=None,
+        **kwargs,
+    ):
+        """
+        Plot stress period boundary condition (MfList) data for a specified
+        stress period.
+
+        Parameters
+        ----------
+        key : str
+            MfList dictionary key. (default is None)
+        names : list
+            List of names for figure titles. (default is None)
+        kper : int
+            MODFLOW zero-based stress period number to return. (default is
+            zero)
+        filename_base : str
+            Base file name that will be used to automatically generate file
+            names for output image files. Plots will be exported as image
+            files if filename_base is not None. (default is None)
+        file_extension : str
+            Valid matplotlib.pyplot file extension for savefig(). Only used
+            if filename_base is not None. (default is 'png')
+        mflay : int
+            MODFLOW zero-based layer number to return. If None, then all
+            layers will be included. (default is None)
+        **kwargs : dict
+            axes : list of matplotlib.pyplot.axis
+                List of matplotlib.pyplot.axis that will be used to plot
+                data for each layer. If axes=None, axes will be generated.
+                (default is None)
+            pcolor : bool
+                Boolean used to determine if a matplotlib.pyplot.pcolormesh
+                plot will be plotted. (default is True)
+            colorbar : bool
+                Boolean used to determine if a color bar will be added to
+                the matplotlib.pyplot.pcolormesh. Only used if pcolor=True.
+                (default is False)
+            inactive : bool
+                Boolean used to determine if a black overlay of inactive
+                cells in a layer will be displayed. (default is True)
+            contour : bool
+                Boolean used to determine if a matplotlib.pyplot.contour
+                plot will be plotted. (default is False)
+            clabel : bool
+                Boolean used to determine if matplotlib.pyplot.clabel
+                will be plotted. Only used if contour=True.
(default is False) + grid : bool + Boolean used to determine if the model grid will be plotted + on the figure. (default is False) + masked_values : list + List of unique values to be excluded from the plot. + + Returns + ---------- + out : list + Empty list is returned if filename_base is not None. Otherwise + a list of matplotlib.pyplot.axis is returned. + """ + from ...plot import PlotUtilities + + if not self.plottable: + raise TypeError("Simulation level packages are not plottable") + + # model.plot() will not work for a mf6 model oc package unless + # this check is here + if self.get_data() is None: + return + + if "cellid" not in self.dtype.names: + return + + axes = PlotUtilities._plot_mflist_helper( + self, + key=key, + names=names, + kper=kper, + filename_base=filename_base, + file_extension=file_extension, + mflay=mflay, + **kwargs, + ) + return axes diff --git a/flopy/mf6/data/mfdatascalar.py b/flopy/mf6/data/mfdatascalar.py index 622754f560..88db11d061 100644 --- a/flopy/mf6/data/mfdatascalar.py +++ b/flopy/mf6/data/mfdatascalar.py @@ -737,7 +737,6 @@ def __init__( path=path, dimensions=dimensions, ) - self._transient_setup(self._data_storage) self.repeating = True @property diff --git a/flopy/mf6/data/mfdatastorage.py b/flopy/mf6/data/mfdatastorage.py index c74fd31d62..fa0836076e 100644 --- a/flopy/mf6/data/mfdatastorage.py +++ b/flopy/mf6/data/mfdatastorage.py @@ -2168,136 +2168,28 @@ def process_internal_line(self, arr_line): return multiplier, print_format def process_open_close_line(self, arr_line, layer, store=True): - # process open/close line - index = 2 - if self._data_type == DatumType.integer: - multiplier = 1 - else: - multiplier = 1.0 - print_format = None - binary = False - data_file = None - data = None - data_dim = self.data_dimensions - if isinstance(arr_line, list): - if len(arr_line) < 2 and store: - message = ( - 'Data array "{}" contains a OPEN/CLOSE ' - "that is not followed by a file. {}".format( - data_dim.structure.name, data_dim.structure.path - ) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - self.data_dimensions.structure.get_model(), - self.data_dimensions.structure.get_package(), - self.data_dimensions.structure.path, - "processing open/close line", - data_dim.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - self._simulation_data.debug, - ) - while index < len(arr_line): - if isinstance(arr_line[index], str): - word = arr_line[index].lower() - if word == "factor" and index + 1 < len(arr_line): - try: - multiplier = convert_data( - arr_line[index + 1], - self.data_dimensions, - self._data_type, - ) - except Exception as ex: - message = ( - "Data array {} contains an OPEN/CLOSE " - "with an invalid multiplier following " - 'the "factor" keyword.' 
- ".".format(data_dim.structure.name) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - self.data_dimensions.structure.get_model(), - self.data_dimensions.structure.get_package(), - self.data_dimensions.structure.path, - "processing open/close line", - data_dim.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - self._simulation_data.debug, - ex, - ) - index += 2 - elif word == "iprn" and index + 1 < len(arr_line): - print_format = arr_line[index + 1] - index += 2 - elif word == "data" and index + 1 < len(arr_line): - data = arr_line[index + 1] - index += 2 - elif word == "binary" or word == "(binary)": - binary = True - index += 1 - else: - break - else: - break - # save comments - if index < len(arr_line): - self.layer_storage[layer].comments = MFComment( - " ".join(arr_line[index:]), - self.data_dimensions.structure.path, - self._simulation_data, - layer, - ) - if arr_line[0].lower() == "open/close": - data_file = clean_filename(arr_line[1]) - else: - data_file = clean_filename(arr_line[0]) - elif isinstance(arr_line, dict): - for key, value in arr_line.items(): - if key.lower() == "factor": - try: - multiplier = convert_data( - value, self.data_dimensions, self._data_type - ) - except Exception as ex: - message = ( - "Data array {} contains an OPEN/CLOSE " - "with an invalid factor following the " - '"factor" keyword.' - ".".format(data_dim.structure.name) - ) - type_, value_, traceback_ = sys.exc_info() - raise MFDataException( - self.data_dimensions.structure.get_model(), - self.data_dimensions.structure.get_package(), - self.data_dimensions.structure.path, - "processing open/close line", - data_dim.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - self._simulation_data.debug, - ex, - ) - if key.lower() == "iprn": - print_format = value - if key.lower() == "binary": - binary = bool(value) - if key.lower() == "data": - data = value - if "filename" in arr_line: - data_file = clean_filename(arr_line["filename"]) - + ( + multiplier, + print_format, + binary, + data_file, + data, + comment, + ) = mfdatautil.process_open_close_line( + arr_line, + data_dim, + self._data_type, + self._simulation_data.debug, + store, + ) + if comment is not None: + self.layer_storage[layer].comments = MFComment( + comment, + self.data_dimensions.structure.path, + self._simulation_data, + layer, + ) if data_file is None: message = ( "Data array {} contains an OPEN/CLOSE without a " diff --git a/flopy/mf6/data/mfdatautil.py b/flopy/mf6/data/mfdatautil.py index 00b4203ef8..bf9f571749 100644 --- a/flopy/mf6/data/mfdatautil.py +++ b/flopy/mf6/data/mfdatautil.py @@ -6,7 +6,7 @@ import numpy as np -from ...utils.datautil import DatumUtil, PyListUtil +from ...utils.datautil import DatumUtil, PyListUtil, clean_filename from ..mfbase import FlopyException, MFDataException from .mfstructure import DatumType @@ -137,6 +137,214 @@ def convert_data(data, data_dimensions, data_type, data_item=None, sub_amt=1): return data +def list_to_array(sarr, model_grid, kper=0, mask=False): + """Convert stress period boundary condition (MFDataList) data for a + specified stress period to a 3-D numpy array. + + Parameters + ---------- + sarr : recarray or list + list data to convert to array + model_grid : ModelGrid + model grid object for data + kper : int + MODFLOW zero-based stress period number to return. 
+    mask : bool
+        Return array with np.NaN instead of zero.
+
+    Returns
+    -------
+    out : dict of numpy.ndarrays
+        Dictionary of 3-D numpy arrays containing the stress period data
+        for a selected stress period. The dictionary keys are the
+        MFDataList dtype names for the stress period data."""
+    i0 = 1
+    if not isinstance(sarr, list):
+        sarr = [sarr]
+    if len(sarr) == 0 or sarr[0] is None:
+        return None
+    if "inode" in sarr[0].dtype.names:
+        raise NotImplementedError()
+    arrays = {}
+
+    if model_grid._grid_type.value == 1:
+        shape = (
+            model_grid.num_layers(),
+            model_grid.num_rows(),
+            model_grid.num_columns(),
+        )
+    elif model_grid._grid_type.value == 2:
+        shape = (
+            model_grid.num_layers(),
+            model_grid.num_cells_per_layer(),
+        )
+    else:
+        shape = (model_grid.num_cells_per_layer(),)
+
+    for name in sarr[0].dtype.names[i0:]:
+        if not sarr[0].dtype.fields[name][0] == object:
+            arr = np.zeros(shape)
+            arrays[name] = arr.copy()
+
+    if np.isscalar(sarr[0]):
+        # if there are no entries for this kper
+        if sarr[0] == 0:
+            if mask:
+                for name, arr in arrays.items():
+                    arrays[name][:] = np.NaN
+            return arrays
+        else:
+            raise Exception("MfList: something bad happened")
+
+    for name, arr in arrays.items():
+        cnt = np.zeros(shape, dtype=np.float64)
+        for sp_rec in sarr:
+            if sp_rec is not None:
+                for rec in sp_rec:
+                    arr[rec["cellid"]] += rec[name]
+                    cnt[rec["cellid"]] += 1.0
+        # average keys that should not be added
+        if name != "cond" and name != "flux":
+            idx = cnt > 0.0
+            arr[idx] /= cnt[idx]
+        if mask:
+            arr = np.ma.masked_where(cnt == 0.0, arr)
+            arr[cnt == 0.0] = np.NaN
+
+        arrays[name] = arr.copy()
+    return arrays
+
+
+def process_open_close_line(
+    arr_line, data_dim, data_type, sim_data, store=True
+):
+    # process open/close line
+    index = 2
+    if data_type == DatumType.integer:
+        multiplier = 1
+    else:
+        multiplier = 1.0
+    print_format = None
+    binary = False
+    data_file = None
+    data = None
+    comment = None
+
+    if isinstance(arr_line, list):
+        if len(arr_line) < 2 and store:
+            message = (
+                'Data array "{}" contains an OPEN/CLOSE '
+                "that is not followed by a file. {}".format(
+                    data_dim.structure.name, data_dim.structure.path
+                )
+            )
+            type_, value_, traceback_ = sys.exc_info()
+            raise MFDataException(
+                data_dim.structure.get_model(),
+                data_dim.structure.get_package(),
+                data_dim.structure.path,
+                "processing open/close line",
+                data_dim.structure.name,
+                inspect.stack()[0][3],
+                type_,
+                value_,
+                traceback_,
+                message,
+                sim_data.debug,
+            )
+        while index < len(arr_line):
+            if isinstance(arr_line[index], str):
+                word = arr_line[index].lower()
+                if word == "factor" and index + 1 < len(arr_line):
+                    try:
+                        multiplier = convert_data(
+                            arr_line[index + 1],
+                            data_dim,
+                            data_type,
+                        )
+                    except Exception as ex:
+                        message = (
+                            "Data array {} contains an OPEN/CLOSE "
+                            "with an invalid multiplier following "
+                            'the "factor" keyword.'
+ ".".format(data_dim.structure.name) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + data_dim.structure.get_model(), + data_dim.structure.get_package(), + data_dim.structure.path, + "processing open/close line", + data_dim.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + sim_data.debug, + ex, + ) + index += 2 + elif word == "iprn" and index + 1 < len(arr_line): + print_format = arr_line[index + 1] + index += 2 + elif word == "data" and index + 1 < len(arr_line): + data = arr_line[index + 1] + index += 2 + elif word == "binary" or word == "(binary)": + binary = True + index += 1 + else: + break + else: + break + # save comments + if index < len(arr_line): + comment = " ".join(arr_line[index:]) + if arr_line[0].lower() == "open/close": + data_file = clean_filename(arr_line[1]) + else: + data_file = clean_filename(arr_line[0]) + elif isinstance(arr_line, dict): + for key, value in arr_line.items(): + if key.lower() == "factor": + try: + multiplier = convert_data(value, data_dim, data_type) + except Exception as ex: + message = ( + "Data array {} contains an OPEN/CLOSE " + "with an invalid factor following the " + '"factor" keyword.' + ".".format(data_dim.structure.name) + ) + type_, value_, traceback_ = sys.exc_info() + raise MFDataException( + data_dim.structure.get_model(), + data_dim.structure.get_package(), + data_dim.structure.path, + "processing open/close line", + data_dim.structure.name, + inspect.stack()[0][3], + type_, + value_, + traceback_, + message, + sim_data.debug, + ex, + ) + if key.lower() == "iprn": + print_format = value + if key.lower() == "binary": + binary = bool(value) + if key.lower() == "data": + data = value + if "filename" in arr_line: + data_file = clean_filename(arr_line["filename"]) + + return multiplier, print_format, binary, data_file, data, comment + + def to_string( val, data_type, diff --git a/flopy/mf6/data/mffileaccess.py b/flopy/mf6/data/mffileaccess.py index bfadbde27d..0e109e75bb 100644 --- a/flopy/mf6/data/mffileaccess.py +++ b/flopy/mf6/data/mffileaccess.py @@ -1035,13 +1035,15 @@ def __init__( self.simple_line = False def read_binary_data_from_file( - self, read_file, modelgrid, precision="double" + self, read_file, modelgrid, precision="double", build_cellid=True ): # read from file header, int_cellid_indexes, ext_cellid_indexes = self._get_header( modelgrid, precision ) file_array = np.fromfile(read_file, dtype=header, count=-1) + if not build_cellid: + return file_array # build data list for recarray cellid_size = len(self._get_cell_header(modelgrid)) data_list = [] @@ -1134,6 +1136,9 @@ def load_from_package( self._last_line_info = [] self._data_line = None + if first_line is None: + first_line = file_handle.readline() + # read in any pre data comments current_line = self._read_pre_data_comments( first_line, file_handle, pre_data_comments, storage diff --git a/flopy/mf6/data/mfstructure.py b/flopy/mf6/data/mfstructure.py index 5ad263c480..e194f14b66 100644 --- a/flopy/mf6/data/mfstructure.py +++ b/flopy/mf6/data/mfstructure.py @@ -209,8 +209,11 @@ def get_block_structure_dict(self, path, common, model_file, block_parent): # get header dict header_dict = {} for item in self.dfn_list[0]: - if item == "multi-package": - header_dict["multi-package"] = True + if isinstance(item, str): + if item == "multi-package": + header_dict["multi-package"] = True + if item.startswith("package-type"): + header_dict["package-type"] = item.split(" ")[1] for dfn_entry in self.dfn_list[1:]: # load 
next data item new_data_item_struct = MFDataItemStructure() @@ -508,6 +511,8 @@ def get_block_structure_dict(self, path, common, model_file, block_parent): line_lst[3], line_lst[4], ] + elif len(line_lst) > 2 and line_lst[1] == "package-type": + header_dict["package-type"] = line_lst[2] # load file definitions for line in dfn_fp: if self._valid_line(line): @@ -1454,6 +1459,29 @@ def __init__(self, data_item, model_data, package_type, dfn_list): self.expected_data_items ) + @property + def basic_item(self): + if not self.parent_block.parent_package.stress_package: + return False + for item in self.data_item_structures: + if ( + ( + (item.repeating or item.optional) + and not ( + item.is_cellid or item.is_aux or item.is_boundname + ) + ) + or item.jagged_array is not None + or item.type == DatumType.keystring + or item.type == DatumType.keyword + or ( + item.description is not None + and "keyword `NONE'" in item.description + ) + ): + return False + return True + @property def is_mname(self): for item in self.data_item_structures: @@ -2109,6 +2137,14 @@ def __init__(self, dfn_file, path, common, model_file): self.has_packagedata = "packagedata" in self.blocks self.has_perioddata = "period" in self.blocks self.multi_package_support = "multi-package" in self.header + self.stress_package = ( + "package-type" in self.header + and self.header["package-type"] == "stress-package" + ) + self.advanced_stress_package = ( + "package-type" in self.header + and self.header["package-type"] == "advanced-stress-package" + ) self.dfn_list = dfn_file.dfn_list self.sub_package = self._sub_package() diff --git a/flopy/mf6/mfpackage.py b/flopy/mf6/mfpackage.py index ed5bcb8e84..f25900614c 100644 --- a/flopy/mf6/mfpackage.py +++ b/flopy/mf6/mfpackage.py @@ -4,7 +4,6 @@ import inspect import os import sys -from re import S import numpy as np @@ -14,7 +13,14 @@ from ..utils.check import mf6check from ..version import __version__ from .coordinates import modeldimensions -from .data import mfdata, mfdataarray, mfdatalist, mfdatascalar, mfstructure +from .data import ( + mfdata, + mfdataarray, + mfdatalist, + mfdataplist, + mfdatascalar, + mfstructure, +) from .data.mfdatautil import DataSearchOutput, MFComment, cellids_equal from .data.mfstructure import DatumType, MFDataItemStructure, MFStructure from .mfbase import ( @@ -466,28 +472,57 @@ def data_factory( trans_array.set_data(data, key=0) return trans_array elif data_type == mfstructure.DataType.list: - return mfdatalist.MFList( - sim_data, - model_or_sim, - structure, - data, - enable, - path, - dimensions, - package, - self, - ) + if ( + structure.basic_item + and self._container_package.package_type.lower() != "nam" + and self._simulation_data.use_pandas + ): + return mfdataplist.MFPandasList( + sim_data, + model_or_sim, + structure, + data, + enable, + path, + dimensions, + package, + self, + ) + else: + return mfdatalist.MFList( + sim_data, + model_or_sim, + structure, + data, + enable, + path, + dimensions, + package, + self, + ) elif data_type == mfstructure.DataType.list_transient: - trans_list = mfdatalist.MFTransientList( - sim_data, - model_or_sim, - structure, - enable, - path, - dimensions, - package, - self, - ) + if structure.basic_item and self._simulation_data.use_pandas: + trans_list = mfdataplist.MFPandasTransientList( + sim_data, + model_or_sim, + structure, + enable, + path, + dimensions, + package, + self, + ) + else: + trans_list = mfdatalist.MFTransientList( + sim_data, + model_or_sim, + structure, + enable, + path, + dimensions, + package, 
+ self, + ) if data is not None: trans_list.set_data(data, key=0, autofill=True) return trans_list @@ -1300,7 +1335,10 @@ def set_all_data_external( if ( isinstance(dataset, mfdataarray.MFArray) or ( - isinstance(dataset, mfdatalist.MFList) + ( + isinstance(dataset, mfdatalist.MFList) + or isinstance(dataset, mfdataplist.MFPandasList) + ) and dataset.structure.type == DatumType.recarray ) and dataset.enabled @@ -1347,7 +1385,10 @@ def set_all_data_internal(self, check_data=True): if ( isinstance(dataset, mfdataarray.MFArray) or ( - isinstance(dataset, mfdatalist.MFList) + ( + isinstance(dataset, mfdatalist.MFList) + or isinstance(dataset, mfdataplist.MFPandasList) + ) and dataset.structure.type == DatumType.recarray ) and dataset.enabled @@ -1361,8 +1402,37 @@ def _find_repeating_datasets(self): repeating_datasets.append(dataset) return repeating_datasets + def _prepare_external(self, fd, file_name, binary=False): + fd_main = fd + fd_path = self._simulation_data.mfpath.get_model_path(self.path[0]) + # resolve full file and folder path + fd_file_path = os.path.join(fd_path, file_name) + fd_folder_path = os.path.split(fd_file_path)[0] + if fd_folder_path != "": + if not os.path.exists(fd_folder_path): + # create new external data folder + os.makedirs(fd_folder_path) + return fd_main, fd_file_path + def _write_block(self, fd, block_header, ext_file_action): transient_key = None + basic_list = False + dataset_one = list(self.datasets.values())[0] + if isinstance( + dataset_one, + (mfdataplist.MFPandasList, mfdataplist.MFPandasTransientList), + ): + basic_list = True + for dataset in self.datasets.values(): + assert isinstance( + dataset, + ( + mfdataplist.MFPandasList, + mfdataplist.MFPandasTransientList, + ), + ) + # write block header + block_header.write_header(fd) if len(block_header.data_items) > 0: transient_key = block_header.get_transient_key() @@ -1379,9 +1449,25 @@ def _write_block(self, fd, block_header, ext_file_action): print( f" writing data {dataset.structure.name}..." ) - data_set_output.append( - dataset.get_file_entry(ext_file_action=ext_file_action) - ) + if basic_list: + ext_fname = dataset.external_file_name() + if ext_fname is not None: + # if dataset.has_modified_ext_data(): + binary = dataset.binary_ext_data() + # write block contents to external file + fd_main, fd = self._prepare_external( + fd, ext_fname, binary + ) + dataset.write_file_entry(fd, fd_main=fd_main) + fd = fd_main + else: + dataset.write_file_entry(fd) + else: + data_set_output.append( + dataset.get_file_entry( + ext_file_action=ext_file_action + ) + ) data_found = True else: if ( @@ -1392,20 +1478,43 @@ def _write_block(self, fd, block_header, ext_file_action): " writing data {} ({}).." 
".".format(dataset.structure.name, transient_key) ) - if dataset.repeating: - output = dataset.get_file_entry( - transient_key, ext_file_action=ext_file_action - ) - if output is not None: - data_set_output.append(output) - data_found = True + if basic_list: + ext_fname = dataset.external_file_name(transient_key) + if ext_fname is not None: + # if dataset.has_modified_ext_data(transient_key): + binary = dataset.binary_ext_data(transient_key) + # write block contents to external file + fd_main, fd = self._prepare_external( + fd, ext_fname, binary + ) + dataset.write_file_entry( + fd, + transient_key, + ext_file_action=ext_file_action, + fd_main=fd_main, + ) + fd = fd_main + else: + dataset.write_file_entry( + fd, + transient_key, + ext_file_action=ext_file_action, + ) else: - data_set_output.append( - dataset.get_file_entry( - ext_file_action=ext_file_action + if dataset.repeating: + output = dataset.get_file_entry( + transient_key, ext_file_action=ext_file_action ) - ) - data_found = True + if output is not None: + data_set_output.append(output) + data_found = True + else: + data_set_output.append( + dataset.get_file_entry( + ext_file_action=ext_file_action + ) + ) + data_found = True except MFDataException as mfde: raise MFDataException( mfdata_except=mfde, @@ -1419,46 +1528,30 @@ def _write_block(self, fd, block_header, ext_file_action): ) if not data_found: return - # write block header - block_header.write_header(fd) - - if self.external_file_name is not None: - # write block contents to external file - indent_string = self._simulation_data.indent_string - fd.write(f"{indent_string}open/close {self.external_file_name}\n") - fd_main = fd - fd_path = os.path.split(os.path.realpath(fd.name))[0] - try: - fd = open(os.path.join(fd_path, self.external_file_name), "w") - except: - type_, value_, traceback_ = sys.exc_info() - message = ( - f'Error reading external file "{self.external_file_name}"' + if not basic_list: + # write block header + block_header.write_header(fd) + + if self.external_file_name is not None: + indent_string = self._simulation_data.indent_string + fd.write( + f"{indent_string}open/close " + f'"{self.external_file_name}"\n' ) - raise MFDataException( - self._container_package.model_name, - self._container_package._get_pname(), - self.path, - "reading external file", - self.structure.name, - inspect.stack()[0][3], - type_, - value_, - traceback_, - message, - self._simulation_data.debug, + # write block contents to external file + fd_main, fd = self._prepare_external( + fd, self.external_file_name ) - - # write data sets - for output in data_set_output: - fd.write(output) + # write data sets + for output in data_set_output: + fd.write(output) # write trailing comments pth = block_header.blk_trailing_comment_path if pth in self._simulation_data.mfdata: self._simulation_data.mfdata[pth].write(fd) - if self.external_file_name is not None: + if self.external_file_name is not None and not basic_list: # switch back writing to package file fd.close() fd = fd_main @@ -1975,10 +2068,12 @@ def check(self, f=None, verbose=True, level=1, checktype=None): for row in data: row_size = len(row) aux_start_loc = ( - row_size - num_aux_names - offset + row_size - num_aux_names - offset - 1 ) # loop through auxiliary variables - for idx, var in enumerate(aux_names): + for idx, var in enumerate( + list(aux_names[0])[1:] + ): # get index of current aux variable data_index = aux_start_loc + idx # verify auxiliary value is either diff --git a/flopy/mf6/mfsimbase.py b/flopy/mf6/mfsimbase.py index 
264f35fdbd..f5f806d700 100644 --- a/flopy/mf6/mfsimbase.py +++ b/flopy/mf6/mfsimbase.py @@ -257,6 +257,7 @@ def __init__(self, path: Union[str, os.PathLike], mfsim): self.verbosity_level = VerbosityLevel.normal self.max_columns_user_set = False self.max_columns_auto_set = False + self.use_pandas = True self._update_str_format() @@ -430,6 +431,8 @@ class MFSimulationBase(PackageContainer): and only writes external data if the data has changed. This option automatically overrides the verify_data and auto_set_sizes, turning both off. + use_pandas: bool + Load/save data using pandas dataframes (for supported data) Examples -------- >>> s = MFSimulationBase.load('my simulation', 'simulation.nam') @@ -455,12 +458,14 @@ def __init__( memory_print_option=None, write_headers=True, lazy_io=False, + use_pandas=True, ): super().__init__(MFSimulationData(sim_ws, self), sim_name) self.simulation_data.verbosity_level = self._resolve_verbosity_level( verbosity_level ) self.simulation_data.write_headers = write_headers + self.simulation_data.use_pandas = use_pandas if lazy_io: self.simulation_data.lazy_io = True @@ -686,6 +691,7 @@ def load( verify_data=False, write_headers=True, lazy_io=False, + use_pandas=True, ): """ Load an existing model. Do not call this method directly. Should only @@ -729,6 +735,9 @@ def load( and only writes external data if the data has changed. This option automatically overrides the verify_data and auto_set_sizes, turning both off. + use_pandas: bool + Load/save data using pandas dataframes (for supported data) + Returns ------- sim : MFSimulation object @@ -746,6 +755,7 @@ def load( sim_ws, verbosity_level, write_headers=write_headers, + use_pandas=use_pandas, ) verbosity_level = instance.simulation_data.verbosity_level diff --git a/flopy/mf6/modflow/mfgwfchd.py b/flopy/mf6/modflow/mfgwfchd.py index b1d018e026..c2a458e7de 100644 --- a/flopy/mf6/modflow/mfgwfchd.py +++ b/flopy/mf6/modflow/mfgwfchd.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -117,10 +117,7 @@ class ModflowGwfchd(mfpackage.MFPackage): dfn_file_name = "gwf-chd.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfdrn.py b/flopy/mf6/modflow/mfgwfdrn.py index 5e3e68612a..31380cf1c2 100644 --- a/flopy/mf6/modflow/mfgwfdrn.py +++ b/flopy/mf6/modflow/mfgwfdrn.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -140,10 +140,7 @@ class ModflowGwfdrn(mfpackage.MFPackage): dfn_file_name = "gwf-drn.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfevt.py b/flopy/mf6/modflow/mfgwfevt.py index fcc407e9c3..2fc5adaa4f 100644 --- a/flopy/mf6/modflow/mfgwfevt.py +++ b/flopy/mf6/modflow/mfgwfevt.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. 
THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -160,10 +160,7 @@ class ModflowGwfevt(mfpackage.MFPackage): dfn_file_name = "gwf-evt.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name fixed_cell", diff --git a/flopy/mf6/modflow/mfgwfevta.py b/flopy/mf6/modflow/mfgwfevta.py index eb74556e7a..691ae9e9f7 100644 --- a/flopy/mf6/modflow/mfgwfevta.py +++ b/flopy/mf6/modflow/mfgwfevta.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ArrayTemplateGenerator, ListTemplateGenerator @@ -118,10 +118,7 @@ class ModflowGwfevta(mfpackage.MFPackage): dfn_file_name = "gwf-evta.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name readasarrays", diff --git a/flopy/mf6/modflow/mfgwfghb.py b/flopy/mf6/modflow/mfgwfghb.py index 0bc28e4207..787d9ec12e 100644 --- a/flopy/mf6/modflow/mfgwfghb.py +++ b/flopy/mf6/modflow/mfgwfghb.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -128,10 +128,7 @@ class ModflowGwfghb(mfpackage.MFPackage): dfn_file_name = "gwf-ghb.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwflak.py b/flopy/mf6/modflow/mfgwflak.py index dc01e74385..6d060ff8d2 100644 --- a/flopy/mf6/modflow/mfgwflak.py +++ b/flopy/mf6/modflow/mfgwflak.py @@ -469,10 +469,7 @@ class ModflowGwflak(mfpackage.MFPackage): dfn_file_name = "gwf-lak.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type advanced-stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfmaw.py b/flopy/mf6/modflow/mfgwfmaw.py index 51ca09dfd9..1eeb143388 100644 --- a/flopy/mf6/modflow/mfgwfmaw.py +++ b/flopy/mf6/modflow/mfgwfmaw.py @@ -395,10 +395,7 @@ class ModflowGwfmaw(mfpackage.MFPackage): dfn_file_name = "gwf-maw.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type advanced-stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfrch.py b/flopy/mf6/modflow/mfgwfrch.py index eeac8c5719..699a9ce9ba 100644 --- a/flopy/mf6/modflow/mfgwfrch.py +++ b/flopy/mf6/modflow/mfgwfrch.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. 
import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -123,10 +123,7 @@ class ModflowGwfrch(mfpackage.MFPackage): dfn_file_name = "gwf-rch.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name fixed_cell", diff --git a/flopy/mf6/modflow/mfgwfrcha.py b/flopy/mf6/modflow/mfgwfrcha.py index 35fc60d5fb..1a1a05a894 100644 --- a/flopy/mf6/modflow/mfgwfrcha.py +++ b/flopy/mf6/modflow/mfgwfrcha.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ArrayTemplateGenerator, ListTemplateGenerator @@ -116,10 +116,7 @@ class ModflowGwfrcha(mfpackage.MFPackage): dfn_file_name = "gwf-rcha.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name readasarrays", diff --git a/flopy/mf6/modflow/mfgwfriv.py b/flopy/mf6/modflow/mfgwfriv.py index 56389249b8..e50cf651f6 100644 --- a/flopy/mf6/modflow/mfgwfriv.py +++ b/flopy/mf6/modflow/mfgwfriv.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -129,10 +129,7 @@ class ModflowGwfriv(mfpackage.MFPackage): dfn_file_name = "gwf-riv.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfsfr.py b/flopy/mf6/modflow/mfgwfsfr.py index e32fb7f3bd..70b470d143 100644 --- a/flopy/mf6/modflow/mfgwfsfr.py +++ b/flopy/mf6/modflow/mfgwfsfr.py @@ -476,10 +476,7 @@ class ModflowGwfsfr(mfpackage.MFPackage): dfn_file_name = "gwf-sfr.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type advanced-stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfuzf.py b/flopy/mf6/modflow/mfgwfuzf.py index dd845bdaa6..dc31038a18 100644 --- a/flopy/mf6/modflow/mfgwfuzf.py +++ b/flopy/mf6/modflow/mfgwfuzf.py @@ -297,10 +297,7 @@ class ModflowGwfuzf(mfpackage.MFPackage): dfn_file_name = "gwf-uzf.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type advanced-stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfgwfwel.py b/flopy/mf6/modflow/mfgwfwel.py index 6dee2f1742..33160c324f 100644 --- a/flopy/mf6/modflow/mfgwfwel.py +++ b/flopy/mf6/modflow/mfgwfwel.py @@ -1,6 +1,6 @@ # DO NOT MODIFY THIS FILE DIRECTLY. THIS FILE MUST BE CREATED BY # mf6/utils/createpackages.py -# FILE created on June 29, 2023 14:20:38 UTC +# FILE created on August 29, 2023 20:06:54 UTC from .. 
import mfpackage from ..data.mfdatautil import ListTemplateGenerator @@ -141,10 +141,7 @@ class ModflowGwfwel(mfpackage.MFPackage): dfn_file_name = "gwf-wel.dfn" dfn = [ - [ - "header", - "multi-package", - ], + ["header", "multi-package", "package-type stress-package"], [ "block options", "name auxiliary", diff --git a/flopy/mf6/modflow/mfsimulation.py b/flopy/mf6/modflow/mfsimulation.py index 28fb749272..0621712c1b 100644 --- a/flopy/mf6/modflow/mfsimulation.py +++ b/flopy/mf6/modflow/mfsimulation.py @@ -74,7 +74,8 @@ class MFSimulation(mfsimbase.MFSimulationBase): load : (sim_name : str, version : string, exe_name : str or PathLike, sim_ws : str or PathLike, strict : bool, verbosity_level : int, load_only : list, verify_data : bool, - write_headers : bool, lazy_io : bool) : MFSimulation + write_headers : bool, lazy_io : bool, use_pandas : bool, + ) : MFSimulation a class method that loads a simulation from files """ @@ -86,6 +87,7 @@ def __init__( sim_ws: Union[str, os.PathLike] = os.curdir, verbosity_level=1, write_headers=True, + use_pandas=True, lazy_io=False, continue_=None, nocheck=None, @@ -101,6 +103,7 @@ def __init__( verbosity_level=verbosity_level, write_headers=write_headers, lazy_io=lazy_io, + use_pandas=use_pandas, ) self.name_file.continue_.set_data(continue_) @@ -128,6 +131,7 @@ def load( verify_data=False, write_headers=True, lazy_io=False, + use_pandas=True, ): return mfsimbase.MFSimulationBase.load( cls, @@ -141,4 +145,5 @@ def load( verify_data, write_headers, lazy_io, + use_pandas, ) diff --git a/flopy/mf6/utils/createpackages.py b/flopy/mf6/utils/createpackages.py index 32b721b3d2..b891e6524a 100644 --- a/flopy/mf6/utils/createpackages.py +++ b/flopy/mf6/utils/createpackages.py @@ -152,6 +152,11 @@ def build_dfn_string(dfn_list, header, package_abbr, flopy_dict): for key, value in header.items(): if key == "multi-package": dfn_string = f'{dfn_string}\n{leading_spaces} "multi-package", ' + if key == "package-type": + dfn_string = ( + f'{dfn_string}\n{leading_spaces} "package-type ' f'{value}"' + ) + # process solution packages if package_abbr in flopy_dict["solution_packages"]: model_types = '", "'.join( @@ -425,7 +430,8 @@ def build_sim_load(): "string,\n exe_name : str or PathLike, " "sim_ws : str or PathLike, strict : bool,\n verbosity_level : " "int, load_only : list, verify_data : bool,\n " - "write_headers : bool, lazy_io : bool) : MFSimulation\n" + "write_headers : bool, lazy_io : bool, use_pandas : bool,\n " + ") : MFSimulation\n" " a class method that loads a simulation from files" '\n """' ) @@ -437,14 +443,14 @@ def build_sim_load(): "sim_ws: Union[str, os.PathLike] = os.curdir,\n " "strict=True, verbosity_level=1, load_only=None,\n " "verify_data=False, write_headers=True,\n " - "lazy_io=False,):\n " + "lazy_io=False, use_pandas=True):\n " "return mfsimbase.MFSimulationBase.load(cls, sim_name, version, " "\n " "exe_name, sim_ws, strict,\n" - " verbosity_level, " + " verbosity_level, " "load_only,\n " "verify_data, write_headers, " - "\n lazy_io)" + "\n lazy_io, use_pandas)" "\n" ) return sim_load, sim_load_c @@ -996,6 +1002,7 @@ def create_packages(): init_vars = build_model_init_vars(options_param_list) options_param_list.insert(0, "lazy_io=False") + options_param_list.insert(0, "use_pandas=True") options_param_list.insert(0, "write_headers=True") options_param_list.insert(0, "verbosity_level=1") options_param_list.insert( @@ -1033,6 +1040,7 @@ def create_packages(): "verbosity_level=verbosity_level,\n{}" "write_headers=write_headers,\n{}" 
"lazy_io=lazy_io,\n{}" + "use_pandas=use_pandas,\n{}" ")\n".format( sparent_init_string, spaces, @@ -1042,6 +1050,7 @@ def create_packages(): spaces, spaces, spaces, + spaces, ) ) sim_import_string = ( diff --git a/flopy/mf6/utils/model_splitter.py b/flopy/mf6/utils/model_splitter.py index 3f36bc53b0..46ecc4e768 100644 --- a/flopy/mf6/utils/model_splitter.py +++ b/flopy/mf6/utils/model_splitter.py @@ -7,7 +7,7 @@ from ...mf6 import modflow from ...plot import plotutil from ...utils import import_optional_dependency -from ..data import mfdataarray, mfdatalist, mfdatascalar +from ..data import mfdataarray, mfdatalist, mfdataplist, mfdatascalar from ..mfbase import PackageContainer OBS_ID1_LUT = { @@ -2890,12 +2890,47 @@ def _remap_package(self, package, ismvr=False): elif isinstance(value, mfdataarray.MFArray): mapped_data = self._remap_array(item, value, mapped_data) - elif isinstance(value, mfdatalist.MFTransientList): + elif isinstance( + value, + ( + mfdatalist.MFTransientList, + mfdataplist.MFPandasTransientList, + ), + ): + if isinstance(value, mfdataplist.MFPandasTransientList): + list_data = mfdatalist.MFTransientList( + value._simulation_data, + value._model_or_sim, + value.structure, + True, + value.path, + value._data_dimensions.package_dim, + value._package, + value._block, + ) + list_data.set_record(value.get_record()) + value = list_data mapped_data = self._remap_transient_list( item, value, mapped_data ) - elif isinstance(value, mfdatalist.MFList): + elif isinstance( + value, (mfdatalist.MFList, mfdataplist.MFPandasList) + ): + if isinstance(value, mfdataplist.MFPandasList): + list_data = mfdatalist.MFList( + value._simulation_data, + value._model_or_sim, + value.structure, + None, + True, + value.path, + value._data_dimensions.package_dim, + value._package, + value._block, + ) + list_data.set_record(value.get_record()) + value = list_data mapped_data = self._remap_mflist(item, value, mapped_data) elif isinstance(value, mfdatascalar.MFScalar): From f77989d33955baa76032d9341226385215e45821 Mon Sep 17 00:00:00 2001 From: scottrp <45947939+scottrp@users.noreply.github.com> Date: Fri, 6 Oct 2023 08:38:09 -0700 Subject: [PATCH 072/101] fix(pandas list): deal with cellids with inconsistent types (#1980) * fix(pandas list): deal with cellids with inconsistent types * feat(pandas list): handle list size of 0 correctly --- autotest/test_binaryfile.py | 2 +- flopy/mf6/data/mfdataplist.py | 137 ++++++++++++++++++++++++++-------- flopy/mf6/mfpackage.py | 2 +- 3 files changed, 109 insertions(+), 32 deletions(-) diff --git a/autotest/test_binaryfile.py b/autotest/test_binaryfile.py index b5cfc65bc0..2a27183289 100644 --- a/autotest/test_binaryfile.py +++ b/autotest/test_binaryfile.py @@ -447,7 +447,7 @@ def mf6_gwf_2sp_st_tr(function_tmpdir): wel = flopy.mf6.ModflowGwfwel( model=gwf, - stress_period_data={0: 0, 1: [[(0, 0, 0), -1]]}, + stress_period_data={0: None, 1: [[(0, 0, 0), -1]]}, ) sto = flopy.mf6.ModflowGwfsto( diff --git a/flopy/mf6/data/mfdataplist.py b/flopy/mf6/data/mfdataplist.py index b0d921bf9d..bdb9858fe8 100644 --- a/flopy/mf6/data/mfdataplist.py +++ b/flopy/mf6/data/mfdataplist.py @@ -391,6 +391,29 @@ def _unique_column_name(data, col_base_name): idx += 1 return col_name + @staticmethod + def _untuple_manually(pdata, loc, new_column_name, column_name, index): + """ + Loop through pandas DataFrame removing tuples from cellid columns. + Used when pandas "insert" method to perform the same task fails. 
+ """ + # build new column list + new_column = [] + for idx, row in pdata.iterrows(): + if isinstance(row[column_name], tuple) or isinstance( + row[column_name], list + ): + new_column.append(row[column_name][index]) + else: + new_column.append(row[column_name]) + + # insert list as new column + pdata.insert( + loc=loc, + column=new_column_name, + value=new_column, + ) + def _untuple_cellids(self, pdata): """ For all cellids in "pdata", convert them to layer, row, column fields and @@ -421,43 +444,97 @@ def _untuple_cellids(self, pdata): for field_idx, column_name in fields_to_correct: # add individual layer/row/column/cell/node columns if isinstance(self._mg, StructuredGrid): - pdata.insert( - loc=field_idx, - column=self._unique_column_name(pdata, "layer"), - value=pdata.apply(lambda x: x[column_name][0], axis=1), - ) - pdata.insert( - loc=field_idx + 1, - column=self._unique_column_name(pdata, "row"), - value=pdata.apply(lambda x: x[column_name][1], axis=1), - ) - pdata.insert( - loc=field_idx + 2, - column=self._unique_column_name(pdata, "column"), - value=pdata.apply(lambda x: x[column_name][2], axis=1), - ) + try: + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "layer"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + except (ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx, + self._unique_column_name(pdata, "layer"), + column_name, + 0, + ) + try: + pdata.insert( + loc=field_idx + 1, + column=self._unique_column_name(pdata, "row"), + value=pdata.apply(lambda x: x[column_name][1], axis=1), + ) + except (ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx + 1, + self._unique_column_name(pdata, "row"), + column_name, + 1, + ) + try: + pdata.insert( + loc=field_idx + 2, + column=self._unique_column_name(pdata, "column"), + value=pdata.apply(lambda x: x[column_name][2], axis=1), + ) + except (ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx + 2, + self._unique_column_name(pdata, "column"), + column_name, + 2, + ) elif isinstance(self._mg, VertexGrid): - pdata.insert( - loc=field_idx, - column=self._unique_column_name(pdata, "layer"), - value=pdata.apply(lambda x: x[column_name][0], axis=1), - ) - pdata.insert( - loc=field_idx + 1, - column=self._unique_column_name(pdata, "cell"), - value=pdata.apply(lambda x: x[column_name][1], axis=1), - ) + try: + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "layer"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + except (ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx, + self._unique_column_name(pdata, "layer"), + column_name, + 0, + ) + try: + pdata.insert( + loc=field_idx + 1, + column=self._unique_column_name(pdata, "cell"), + value=pdata.apply(lambda x: x[column_name][1], axis=1), + ) + except (ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx + 1, + self._unique_column_name(pdata, "cell"), + column_name, + 1, + ) elif isinstance(self._mg, UnstructuredGrid): if column_name == "node": # fixing a problem where node was specified as a tuple # make sure new column is named properly column_name = "node_2" pdata = pdata.rename(columns={"node": column_name}) - pdata.insert( - loc=field_idx, - column=self._unique_column_name(pdata, "node"), - value=pdata.apply(lambda x: x[column_name][0], axis=1), - ) + try: + pdata.insert( + loc=field_idx, + column=self._unique_column_name(pdata, "node"), + value=pdata.apply(lambda x: x[column_name][0], axis=1), + ) + except 
(ValueError, TypeError): + self._untuple_manually( + pdata, + field_idx, + self._unique_column_name(pdata, "node"), + column_name, + 0, + ) # remove cellid tuple pdata = pdata.drop(column_name, axis=1) return pdata, len(fields_to_correct) diff --git a/flopy/mf6/mfpackage.py b/flopy/mf6/mfpackage.py index f25900614c..747f1f6919 100644 --- a/flopy/mf6/mfpackage.py +++ b/flopy/mf6/mfpackage.py @@ -2276,7 +2276,7 @@ def _update_size_defs(self): new_size = len(dataset.get_data()) if size_def.get_data() is None: - current_size = 0 + current_size = -1 else: current_size = size_def.get_data() From 6a46a9b75286d7e268d4addc77946246ccb5db56 Mon Sep 17 00:00:00 2001 From: scottrp <45947939+scottrp@users.noreply.github.com> Date: Fri, 6 Oct 2023 11:00:57 -0700 Subject: [PATCH 073/101] feat(pandas list): fix for handling special case where boundname set but not used (#1982) * feat(pandas list): fix for handling special case where boundname set but not used --- flopy/mf6/data/mfdataplist.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/flopy/mf6/data/mfdataplist.py b/flopy/mf6/data/mfdataplist.py index bdb9858fe8..508c3412f6 100644 --- a/flopy/mf6/data/mfdataplist.py +++ b/flopy/mf6/data/mfdataplist.py @@ -549,6 +549,12 @@ def _resolve_columns(self, data): if len(data[0]) == len(self._data_item_names): return self._data_item_names, True + if ( + len(data[0]) == len(self._header_names) - 1 + and self._header_names[-1] == "boundname" + ): + return self._header_names[:-1], True + if ( len(data[0]) == len(self._data_item_names) - 1 and self._data_item_names[-1] == "boundname" From f3b94101aefb4f9ffa2c443c84bab5d75cac7e21 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 6 Oct 2023 15:16:04 -0400 Subject: [PATCH 074/101] test(modpathfile): test shapefile export with custom fields (#1981) --- autotest/test_modpathfile.py | 63 +++++++++++++++++++++++++++++++++--- 1 file changed, 59 insertions(+), 4 deletions(-) diff --git a/autotest/test_modpathfile.py b/autotest/test_modpathfile.py index 4fe25dc2fd..32a0531eff 100644 --- a/autotest/test_modpathfile.py +++ b/autotest/test_modpathfile.py @@ -1,11 +1,13 @@ import inspect import io import pstats +from itertools import repeat from shutil import copytree import numpy as np +import numpy.lib.recfunctions as rfn import pytest -from modflow_devtools.markers import requires_exe +from modflow_devtools.markers import requires_exe, requires_pkg from flopy.mf6 import ( MFSimulation, @@ -26,7 +28,7 @@ pytestmark = pytest.mark.mf6 -def __create_simulation( +def __create_and_run_simulation( ws, name, nrow, @@ -181,7 +183,7 @@ def get_nodes(locs): @pytest.fixture(scope="module") def mp7_small(module_tmpdir): - return __create_simulation( + return __create_and_run_simulation( ws=module_tmpdir / "mp7_small", name="mp7_small", nper=1, @@ -209,7 +211,7 @@ def mp7_small(module_tmpdir): @pytest.fixture(scope="module") def mp7_large(module_tmpdir): - return __create_simulation( + return __create_and_run_simulation( ws=module_tmpdir / "mp7_large", name="mp7_large", nper=1, @@ -307,3 +309,56 @@ def test_get_destination_endpoint_data( dest_cells=nodew if locations == "well" else nodesr ) ) + + +@pytest.mark.parametrize("longfieldname", [True, False]) +@requires_exe("mf6", "mp7") +@requires_pkg("shapefile", "shapely") +def test_write_shapefile(function_tmpdir, mp7_small, longfieldname): + from shapefile import Reader + + # setup and run model, then copy outputs to function_tmpdir + sim, forward_model_name, _, _, _ = mp7_small + gwf = sim.get_model() + grid = gwf.modelgrid + ws 
= function_tmpdir / "ws" + copytree(sim.simulation_data.mfpath.get_sim_path(), ws) + + # make sure forward model output exists + forward_path = ws / f"{forward_model_name}.mppth" + assert forward_path.is_file() + + # load pathlines from file + pathline_file = PathlineFile(forward_path) + pathlines = pathline_file.get_alldata() + + # define shapefile path + shp_file = ws / "pathlines.shp" + + # add a column to the pathline recarray + fieldname = "newfield" + ("longname" if longfieldname else "") + fieldval = "x" + pathlines = [ + rfn.append_fields( + pl, fieldname, list(repeat(fieldval, len(pl))), dtypes="|S1" + ) + for pl in pathlines + ] + + # write the pathline recarray to shapefile + pathline_file.write_shapefile( + pathline_data=pathlines, + shpname=shp_file, + one_per_particle=False, + mg=grid, + ) + + # make sure shapefile exists + assert shp_file.is_file() + + # load shapefile + with Reader(shp_file) as reader: + fieldnames = [f[0] for f in reader.fields[1:]] + fieldname = "newfiname_" if longfieldname else fieldname + assert fieldname in fieldnames + assert all(r[fieldname] == fieldval for r in reader.iterRecords()) From 0853ef8b3c7181cd5cc8dd8652408ad27b4eab8c Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sat, 7 Oct 2023 15:26:45 -0400 Subject: [PATCH 075/101] ci(mf6.yml): include more mf6 autotests (#1983) --- .github/workflows/mf6.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index e1709ba687..e2c99685f1 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -87,7 +87,7 @@ jobs: - name: Run tests working-directory: modflow6/autotest run: | - pytest -v --cov=flopy --cov-report=xml --durations=0 -k "test_gw" -n auto + pytest -v --cov=flopy --cov-report=xml --durations=0 -n auto -m "not repo and not regression" - name: Print coverage report before upload working-directory: ./modflow6/autotest From ff9e2247bf99c28a10f72e479d327e7eb3364b22 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Thu, 12 Oct 2023 13:46:24 -0400 Subject: [PATCH 076/101] test(model_splitter): use tempdirs (#1984) --- autotest/test_model_splitter.py | 132 +++++++++++++++++--------------- 1 file changed, 71 insertions(+), 61 deletions(-) diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index e28e809a47..34614cd14e 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -2,6 +2,7 @@ import pytest from autotest.conftest import get_example_data_path from modflow_devtools.markers import requires_exe, requires_pkg +from modflow_devtools.misc import set_dir import flopy from flopy.mf6 import MFSimulation @@ -254,78 +255,87 @@ def test_control_records(function_tmpdir): ncol = 10 nper = 3 - sim = flopy.mf6.MFSimulation() - ims = flopy.mf6.ModflowIms(sim, complexity="SIMPLE") - - tdis = flopy.mf6.ModflowTdis( - sim, - nper=nper, - perioddata=((1.0, 1, 1.0), (1.0, 1, 1.0), (1.0, 1, 1.0)), - ) + # create base simulation + full_ws = function_tmpdir / "full" + full_ws.mkdir() + with set_dir(full_ws): + sim = flopy.mf6.MFSimulation(full_ws) + ims = flopy.mf6.ModflowIms(sim, complexity="SIMPLE") + + tdis = flopy.mf6.ModflowTdis( + sim, + nper=nper, + perioddata=((1.0, 1, 1.0), (1.0, 1, 1.0), (1.0, 1, 1.0)), + ) - gwf = flopy.mf6.ModflowGwf(sim, save_flows=True) - - botm2 = np.ones((nrow, ncol)) * 20 - dis = flopy.mf6.ModflowGwfdis( - gwf, - nlay=2, - nrow=nrow, - ncol=ncol, - delr=1, - delc=1, - top=35, - botm=[30, botm2], - idomain=1, - ) + gwf = flopy.mf6.ModflowGwf(sim, 
save_flows=True) + + botm2 = np.ones((nrow, ncol)) * 20 + dis = flopy.mf6.ModflowGwfdis( + gwf, + nlay=2, + nrow=nrow, + ncol=ncol, + delr=1, + delc=1, + top=35, + botm=[30, botm2], + idomain=1, + ) - ic = flopy.mf6.ModflowGwfic(gwf, strt=32) - npf = flopy.mf6.ModflowGwfnpf( - gwf, - k=[ - 1.0, - { - "data": np.ones((10, 10)) * 0.75, - "filename": "k.l2.txt", - "iprn": 1, - "factor": 1, - }, - ], - k33=[ - np.ones((nrow, ncol)), - { - "data": np.ones((nrow, ncol)) * 0.5, - "filename": "k33.l2.bin", - "iprn": 1, - "factor": 1, - "binary": True, - }, - ], - ) + ic = flopy.mf6.ModflowGwfic(gwf, strt=32) + npf = flopy.mf6.ModflowGwfnpf( + gwf, + k=[ + 1.0, + { + "data": np.ones((10, 10)) * 0.75, + "filename": "k.l2.txt", + "iprn": 1, + "factor": 1, + }, + ], + k33=[ + np.ones((nrow, ncol)), + { + "data": np.ones((nrow, ncol)) * 0.5, + "filename": "k33.l2.bin", + "iprn": 1, + "factor": 1, + "binary": True, + }, + ], + ) - wel_rec = [ - ((0, 4, 5), -10), - ] + wel_rec = [ + ((0, 4, 5), -10), + ] - spd = { - 0: wel_rec, - 1: {"data": wel_rec, "filename": "wel.1.txt"}, - 2: {"data": wel_rec, "filename": "wel.2.bin", "binary": True}, - } + spd = { + 0: wel_rec, + 1: {"data": wel_rec, "filename": "wel.1.txt"}, + 2: {"data": wel_rec, "filename": "wel.2.bin", "binary": True}, + } - wel = flopy.mf6.ModflowGwfwel(gwf, stress_period_data=spd) + wel = flopy.mf6.ModflowGwfwel(gwf, stress_period_data=spd) - chd_rec = [] - for cond, j in ((30, 0), (22, 9)): - for i in range(10): - chd_rec.append(((0, i, j), cond)) + chd_rec = [] + for cond, j in ((30, 0), (22, 9)): + for i in range(10): + chd_rec.append(((0, i, j), cond)) - chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data={0: chd_rec}) + chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data={0: chd_rec}) + # define splitting array arr = np.zeros((10, 10), dtype=int) arr[0:5, :] = 1 - mfsplit = flopy.mf6.utils.Mf6Splitter(sim) - new_sim = mfsplit.split_model(arr) + # split + split_ws = function_tmpdir / "split" + split_ws.mkdir() + with set_dir(split_ws): + mfsplit = flopy.mf6.utils.Mf6Splitter(sim) + new_sim = mfsplit.split_model(arr) ml1 = new_sim.get_model("model_1") From d0ddf2acfe320b9d7f3aaf5d5f45de1b70620eaf Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 13 Oct 2023 13:19:53 -0400 Subject: [PATCH 077/101] ci: swap setup-fortran for install-gfortran-action (#1985) --- .github/workflows/mf6.yml | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index e2c99685f1..73729330c6 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -44,8 +44,11 @@ jobs: pip install https://github.com/MODFLOW-USGS/modflowapi/zipball/develop pip install .[test,optional] - - name: Install gfortran - uses: modflowpy/install-gfortran-action@v1 + - name: Setup GNU Fortran + uses: awvwgk/setup-fortran@v1 + with: + compiler: gcc + version: 13 - name: Checkout MODFLOW 6 uses: actions/checkout@v4 From 04da63a114007e047952bafd2b255f215b7d1b84 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 13 Oct 2023 16:31:09 -0400 Subject: [PATCH 078/101] ci: fix awvwgk/setup-fortran -> fortran-lang/setup-fortran (#1986) --- .github/workflows/mf6.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index 73729330c6..1ed9fe26e8 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -45,7 +45,7 @@ jobs: pip install .[test,optional] - name: Setup GNU Fortran - uses: awvwgk/setup-fortran@v1 + uses: 
fortran-lang/setup-fortran@v1 with: compiler: gcc version: 13 From 3e176d0d668675ae2372106ffe1efc829debad38 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Mon, 16 Oct 2023 12:29:19 -0400 Subject: [PATCH 079/101] ci: remove branch filters from push trigger (#1987) --- .github/workflows/commit.yml | 5 ----- .github/workflows/mf6.yml | 5 ----- .github/workflows/rtd.yml | 6 ------ 3 files changed, 16 deletions(-) diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 5e5163f5ce..7765c4fd6c 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -1,11 +1,6 @@ name: FloPy continuous integration on: push: - branches: - - master - - develop - - ci-diagnose* - - notebooks pull_request: branches: - master diff --git a/.github/workflows/mf6.yml b/.github/workflows/mf6.yml index 1ed9fe26e8..b48b02f753 100644 --- a/.github/workflows/mf6.yml +++ b/.github/workflows/mf6.yml @@ -4,11 +4,6 @@ on: schedule: - cron: '0 8 * * *' # run at 8 AM UTC (12 am PST) push: - branches: - - master - - develop - - release* - - ci-diagnose* pull_request: branches: - master diff --git a/.github/workflows/rtd.yml b/.github/workflows/rtd.yml index b092e4b16e..f7e8c3be0d 100644 --- a/.github/workflows/rtd.yml +++ b/.github/workflows/rtd.yml @@ -2,12 +2,6 @@ name: FloPy documentation on: push: - branches: - - master - - develop - - release* - - ci-diagnose* - - notebooks* pull_request: branches: - master From 4ed68a2be1e6b4f607e27c38ce5eab9f66492e3f Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Wed, 18 Oct 2023 16:12:53 -0400 Subject: [PATCH 080/101] refactor(contour_array): add layer param, update docstrings, expand tests (#1975) * add layer param to flopy.export.utils.contour_array() * pop/pass layer kwarg to contour_array() in export_array_contours() * test export_array_contours() disu grids with constant/variable ncpl * test PlotMapView.contour_array() array formats and with/without layer * update PlotMapView.contour_array() docstring (dimension expectations) * refactor some disu test grids & models into fixtures --- autotest/test_export.py | 143 ++++++++++++++++++------- autotest/test_gridgen.py | 27 +++-- autotest/test_plot_map_view.py | 71 +++++++++--- flopy/discretization/structuredgrid.py | 2 +- flopy/discretization/vertexgrid.py | 4 +- flopy/export/utils.py | 9 +- flopy/plot/map.py | 10 +- 7 files changed, 190 insertions(+), 76 deletions(-) diff --git a/autotest/test_export.py b/autotest/test_export.py index c7f84661da..1b5e607bd1 100644 --- a/autotest/test_export.py +++ b/autotest/test_export.py @@ -139,6 +139,44 @@ def disu_sim(name, tmpdir, missing_arrays=False): return sim +@pytest.fixture +def unstructured_grid(example_data_path): + ws = example_data_path / "unstructured" + + # load vertices + verts = load_verts(ws / "ugrid_verts.dat") + + # load the index list into iverts, xc, and yc + iverts, xc, yc = load_iverts(ws / "ugrid_iverts.dat", closed=True) + + # create a 3 layer model grid + ncpl = np.array(3 * [len(iverts)]) + nnodes = np.sum(ncpl) + + top = np.ones(nnodes) + botm = np.ones(nnodes) + + # set top and botm elevations + i0 = 0 + i1 = ncpl[0] + elevs = [100, 0, -100, -200] + for ix, cpl in enumerate(ncpl): + top[i0:i1] *= elevs[ix] + botm[i0:i1] *= elevs[ix + 1] + i0 += cpl + i1 += cpl + + return UnstructuredGrid( + vertices=verts, + iverts=iverts, + xcenters=xc, + ycenters=yc, + top=top, + botm=botm, + ncpl=ncpl, + ) + + @requires_pkg("shapefile") @pytest.mark.parametrize("pathlike", (True, False)) def test_output_helper_shapefile_export( @@ -613,7 
+651,7 @@ def test_export_array2(function_tmpdir): @requires_pkg("shapefile", "shapely") -def test_export_array_contours(function_tmpdir): +def test_export_array_contours_structured(function_tmpdir): nrow = 7 ncol = 11 crs = 4431 @@ -648,6 +686,61 @@ def test_export_array_contours(function_tmpdir): assert os.path.isfile(filename), "did not create contour shapefile" +@requires_pkg("shapefile", "shapely") +def test_export_array_contours_unstructured( + function_tmpdir, unstructured_grid +): + from shapefile import Reader + + grid = unstructured_grid + fname = function_tmpdir / "myarraycontours1.shp" + export_array_contours(grid, fname, np.arange(grid.nnodes)) + assert fname.is_file(), "did not create contour shapefile" + + # visual debugging + grid.plot(alpha=0.2) + with Reader(fname) as r: + shapes = r.shapes() + for s in shapes: + x = [i[0] for i in s.points[:]] + y = [i[1] for i in s.points[:]] + plt.plot(x, y) + + # plt.show() + + +from autotest.test_gridgen import sim_disu_diff_layers + + +@requires_pkg("shapefile", "shapely") +def test_export_array_contours_unstructured_diff_layers( + function_tmpdir, sim_disu_diff_layers +): + from shapefile import Reader + + gwf = sim_disu_diff_layers.get_model() + grid = gwf.modelgrid + a = np.arange(grid.nnodes) + for layer in range(3): + fname = function_tmpdir / f"contours.{layer}.shp" + export_array_contours(grid, fname, a, layer=layer) + assert fname.is_file(), "did not create contour shapefile" + + # visual debugging + fig, axes = plt.subplots(1, 3, subplot_kw={"aspect": "equal"}) + for layer, ax in enumerate(axes): + fname = function_tmpdir / f"contours.{layer}.shp" + with Reader(fname) as r: + shapes = r.shapes() + for s in shapes: + x = [i[0] for i in s.points[:]] + y = [i[1] for i in s.points[:]] + ax.plot(x, y) + grid.plot(ax=ax, alpha=0.2, layer=layer) + + # plt.show() + + @requires_pkg("shapefile", "shapely") def test_export_contourf(function_tmpdir, example_data_path): from shapefile import Reader @@ -1331,52 +1424,18 @@ def test_vtk_vector(function_tmpdir, example_data_path): @requires_pkg("vtk") -def test_vtk_unstructured(function_tmpdir, example_data_path): +def test_vtk_unstructured(function_tmpdir, unstructured_grid): from vtkmodules.util.numpy_support import vtk_to_numpy from vtkmodules.vtkIOLegacy import vtkUnstructuredGridReader - u_data_ws = example_data_path / "unstructured" - - # load vertices - verts = load_verts(u_data_ws / "ugrid_verts.dat") - - # load the index list into iverts, xc, and yc - iverts, xc, yc = load_iverts(u_data_ws / "ugrid_iverts.dat", closed=True) - - # create a 3 layer model grid - ncpl = np.array(3 * [len(iverts)]) - nnodes = np.sum(ncpl) - - top = np.ones(nnodes) - botm = np.ones(nnodes) - - # set top and botm elevations - i0 = 0 - i1 = ncpl[0] - elevs = [100, 0, -100, -200] - for ix, cpl in enumerate(ncpl): - top[i0:i1] *= elevs[ix] - botm[i0:i1] *= elevs[ix + 1] - i0 += cpl - i1 += cpl - - # create the modelgrid - modelgrid = UnstructuredGrid( - vertices=verts, - iverts=iverts, - xcenters=xc, - ycenters=yc, - top=top, - botm=botm, - ncpl=ncpl, - ) + grid = unstructured_grid outfile = function_tmpdir / "disu_grid.vtu" vtkobj = Vtk( - modelgrid=modelgrid, vertical_exageration=2, binary=True, smooth=False + modelgrid=grid, vertical_exageration=2, binary=True, smooth=False ) - vtkobj.add_array(modelgrid.top, "top") - vtkobj.add_array(modelgrid.botm, "botm") + vtkobj.add_array(grid.top, "top") + vtkobj.add_array(grid.botm, "botm") vtkobj.write(outfile) assert is_binary_file(outfile) @@ -1390,7 +1449,9 @@ 
def test_vtk_unstructured(function_tmpdir, example_data_path): top2 = vtk_to_numpy(data.GetCellData().GetArray("top")) - assert np.allclose(np.ravel(top), top2), "Field data not properly written" + assert np.allclose( + np.ravel(grid.top), top2 + ), "Field data not properly written" @requires_pkg("vtk", "pyvista") diff --git a/autotest/test_gridgen.py b/autotest/test_gridgen.py index 0703a274eb..4063e86ff0 100644 --- a/autotest/test_gridgen.py +++ b/autotest/test_gridgen.py @@ -164,13 +164,11 @@ def test_mf6disv(function_tmpdir): plt.close("all") -@pytest.mark.slow -@requires_exe("mf6", "gridgen") -@requires_pkg("shapely", "shapefile") -def test_mf6disu(function_tmpdir): +@pytest.fixture +def sim_disu_diff_layers(function_tmpdir): from shapely.geometry import Polygon - name = "dummy" + name = "disu_diff_layers" nlay = 3 nrow = 10 ncol = 10 @@ -232,6 +230,17 @@ def test_mf6disu(function_tmpdir): head_filerecord=head_file, saverecord=[("HEAD", "ALL"), ("BUDGET", "ALL")], ) + + return sim + + +@pytest.mark.slow +@requires_exe("mf6", "gridgen") +@requires_pkg("shapely", "shapefile") +def test_mf6disu(sim_disu_diff_layers): + sim = sim_disu_diff_layers + ws = sim.sim_path + gwf = sim.get_model() sim.write_simulation() gwf.modelgrid.set_coord_info(angrot=15) @@ -243,9 +252,9 @@ def test_mf6disu(function_tmpdir): assert np.allclose(gwf.modelgrid.ncpl, np.array([436, 184, 112])) # write grid and model shapefiles - fname = function_tmpdir / "grid.shp" + fname = ws / "grid.shp" gwf.modelgrid.write_shapefile(fname) - fname = function_tmpdir / "model.shp" + fname = ws / "model.shp" gwf.export(fname) sim.run_simulation(silent=True) @@ -272,7 +281,7 @@ def test_mf6disu(function_tmpdir): ax.set_title(f"Layer {ilay + 1}") pmv.plot_vector(spdis["qx"], spdis["qy"], color="white") fname = "results.png" - fname = function_tmpdir / fname + fname = ws / fname plt.savefig(fname) plt.close("all") @@ -307,7 +316,7 @@ def test_mf6disu(function_tmpdir): sim_name=name, version="mf6", exe_name="mf6", - sim_ws=function_tmpdir, + sim_ws=ws, ) gwf = sim.get_model(name) diff --git a/autotest/test_plot_map_view.py b/autotest/test_plot_map_view.py index 7c0aa8470b..ae4d2630e6 100644 --- a/autotest/test_plot_map_view.py +++ b/autotest/test_plot_map_view.py @@ -31,6 +31,12 @@ from flopy.utils import CellBudgetFile, EndpointFile, HeadFile, PathlineFile +@pytest.fixture +def rng(): + # set seed so parametrized plot tests are comparable + return np.random.default_rng(0) + + @requires_pkg("shapely") def test_map_view(): m = flopy.modflow.Modflow(rotation=20.0) @@ -162,21 +168,19 @@ def test_map_view_bc_UZF_3lay(example_data_path): ), f"Unexpected collection type: {type(col)}" -def test_map_view_contour(function_tmpdir): - arr = np.random.rand(10, 10) * 100 - arr[-1, :] = np.nan - delc = np.array([10] * 10, dtype=float) - delr = np.array([8] * 10, dtype=float) - top = np.ones((10, 10), dtype=float) - botm = np.ones((3, 10, 10), dtype=float) +@pytest.mark.parametrize("ndim", [1, 2, 3]) +def test_map_view_contour_array_structured(function_tmpdir, ndim, rng): + nlay, nrow, ncol = 3, 10, 10 + ncpl = nrow * ncol + delc = np.array([10] * nrow, dtype=float) + delr = np.array([8] * ncol, dtype=float) + top = np.ones((nrow, ncol), dtype=float) + botm = np.ones((nlay, nrow, ncol), dtype=float) botm[0] = 0.75 botm[1] = 0.5 botm[2] = 0.25 - idomain = np.ones((3, 10, 10)) + idomain = np.ones((nlay, nrow, ncol)) idomain[0, 0, :] = 0 - vmin = np.nanmin(arr) - vmax = np.nanmax(arr) - levels = np.linspace(vmin, vmax, 7) grid = StructuredGrid( 
delc=delc, @@ -185,16 +189,49 @@ def test_map_view_contour(function_tmpdir): botm=botm, idomain=idomain, lenuni=1, - nlay=3, - nrow=10, - ncol=10, + nlay=nlay, + nrow=nrow, + ncol=ncol, ) - pmv = PlotMapView(modelgrid=grid, layer=0) - contours = pmv.contour_array(a=arr) - plt.savefig(function_tmpdir / "map_view_contour.png") + # define full grid 1D array to contour + arr = rng.random(nlay * nrow * ncol) * 100 + + for l in range(nlay): + if ndim == 1: + # full grid 1D array + pmv = PlotMapView(modelgrid=grid, layer=l) + contours = pmv.contour_array(a=arr) + fname = f"map_view_contour_{ndim}d_l{l}_full.png" + plt.savefig(function_tmpdir / fname) + plt.clf() + + # 1 layer slice + pmv = PlotMapView(modelgrid=grid, layer=l) + contours = pmv.contour_array(a=arr[(l * ncpl) : ((l + 1) * ncpl)]) + fname = f"map_view_contour_{ndim}d_l{l}_1lay.png" + plt.savefig(function_tmpdir / fname) + plt.clf() + elif ndim == 2: + # 1 layer as 2D + # arr[-1, :] = np.nan # add nan to test nan handling + pmv = PlotMapView(modelgrid=grid, layer=l) + contours = pmv.contour_array( + a=arr.reshape(nlay, nrow, ncol)[l, :, :] + ) + plt.savefig(function_tmpdir / f"map_view_contour_{ndim}d_l{l}.png") + plt.clf() + elif ndim == 3: + # full grid as 3D + pmv = PlotMapView(modelgrid=grid, layer=l) + contours = pmv.contour_array(a=arr.reshape(nlay, nrow, ncol)) + plt.savefig(function_tmpdir / f"map_view_contour_{ndim}d_l{l}.png") + plt.clf() # if we ever revert from standard contours to tricontours, restore this nan check + # vmin = np.nanmin(arr) + # vmax = np.nanmax(arr) + # levels = np.linspace(vmin, vmax, 7) # for ix, lev in enumerate(contours.levels): # if not np.allclose(lev, levels[ix]): # raise AssertionError("TriContour NaN catch Failed") diff --git a/flopy/discretization/structuredgrid.py b/flopy/discretization/structuredgrid.py index c7c6246f48..c850fa303d 100644 --- a/flopy/discretization/structuredgrid.py +++ b/flopy/discretization/structuredgrid.py @@ -1637,7 +1637,7 @@ def get_plottable_layer_array(self, a, layer): plotarray = plotarray.reshape(self.shape) plotarray = plotarray[layer, :, :] else: - raise Exception("Array to plot must be of dimension 1, 2, or 3") + raise ValueError("Array to plot must be of dimension 1, 2, or 3") msg = f"{plotarray.shape} /= {required_shape}" assert plotarray.shape == required_shape, msg return plotarray diff --git a/flopy/discretization/vertexgrid.py b/flopy/discretization/vertexgrid.py index 44a9872f57..f22aec2960 100644 --- a/flopy/discretization/vertexgrid.py +++ b/flopy/discretization/vertexgrid.py @@ -528,7 +528,7 @@ def get_plottable_layer_array(self, a, layer): a = np.squeeze(a, axis=1) plotarray = a[layer, :] else: - raise Exception( + raise ValueError( "Array has 3 dimensions so one of them must be of size 1 " "for a VertexGrid." 
) @@ -540,7 +540,7 @@ def get_plottable_layer_array(self, a, layer): plotarray = plotarray.reshape(self.nlay, self.ncpl) plotarray = plotarray[layer, :] else: - raise Exception("Array to plot must be of dimension 1 or 2") + raise ValueError("Array to plot must be of dimension 1 or 2") msg = f"{plotarray.shape[0]} /= {required_shape}" assert plotarray.shape == required_shape, msg return plotarray diff --git a/flopy/export/utils.py b/flopy/export/utils.py index 0277e55d43..ff77baadda 100644 --- a/flopy/export/utils.py +++ b/flopy/export/utils.py @@ -1948,14 +1948,15 @@ def export_array_contours( assert nlevels < maxlevels, msg levels = np.arange(imin, imax, interval) ax = plt.subplots()[-1] - ctr = contour_array(modelgrid, ax, a, levels=levels) + layer = kwargs.pop("layer", 0) + ctr = contour_array(modelgrid, ax, a, layer, levels=levels) kwargs["mg"] = modelgrid export_contours(filename, ctr, fieldname, **kwargs) plt.close() -def contour_array(modelgrid, ax, a, **kwargs): +def contour_array(modelgrid, ax, a, layer=0, **kwargs): """ Create a QuadMesh plot of the specified array using pcolormesh @@ -1967,6 +1968,8 @@ def contour_array(modelgrid, ax, a, **kwargs): ax to add the contours a : np.ndarray array to contour + layer : int, optional + layer to contour Returns ------- @@ -1976,7 +1979,7 @@ def contour_array(modelgrid, ax, a, **kwargs): from ..plot import PlotMapView kwargs["ax"] = ax - pmv = PlotMapView(modelgrid=modelgrid) + pmv = PlotMapView(modelgrid=modelgrid, layer=layer) contour_set = pmv.contour_array(a=a, **kwargs) return contour_set diff --git a/flopy/plot/map.py b/flopy/plot/map.py index 98ecdd44d8..27a7ab76db 100644 --- a/flopy/plot/map.py +++ b/flopy/plot/map.py @@ -156,12 +156,16 @@ def plot_array(self, a, masked_values=None, **kwargs): def contour_array(self, a, masked_values=None, **kwargs): """ - Contour an array. If the array is three-dimensional, then the method - will contour the layer tied to this class (self.layer). + Contour an array on the grid. By default the top layer + is contoured. To select a different layer, specify the + layer in the class constructor. + + For structured and vertex grids, the array may be 1D, 2D or 3D. + For unstructured grids, the array must be 1D or 2D. Parameters ---------- - a : numpy.ndarray + a : 1D, 2D or 3D array-like Array to plot. masked_values : iterable of floats, ints Values to mask. From 4ef699e10b1fbd80055b85ca00a17b96d1c627ce Mon Sep 17 00:00:00 2001 From: Joshua Larsen Date: Tue, 31 Oct 2023 17:00:31 -0700 Subject: [PATCH 081/101] update(model_splitter.py): (#1994) * add check for no boundary condition instances in list based input packages. Do not write package in this case * update options setting routine to respect previously set options for split models * remove maxbound variable from mapped_data for list based input packages. 
Allows for dynamic maxbound setting by flopy built ins * update notebook example to skip None packages --- .../mf6_parallel_model_splitting_example.py | 2 +- autotest/test_model_splitter.py | 106 ++++++++++++++++++ flopy/mf6/utils/model_splitter.py | 13 ++- 3 files changed, 118 insertions(+), 3 deletions(-) diff --git a/.docs/Notebooks/mf6_parallel_model_splitting_example.py b/.docs/Notebooks/mf6_parallel_model_splitting_example.py index fee4fffd4b..35c535c4bc 100644 --- a/.docs/Notebooks/mf6_parallel_model_splitting_example.py +++ b/.docs/Notebooks/mf6_parallel_model_splitting_example.py @@ -188,7 +188,7 @@ pak = model.get_package(pkg) try: rarrays[ix + 1] = pak.stress_period_data.data[0] - except TypeError: + except (TypeError, AttributeError): pass recarray = mfsplit.reconstruct_recarray(rarrays) if pkg == "riv": diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index 34614cd14e..467b53e590 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -366,3 +366,109 @@ def test_control_records(function_tmpdir): raise AssertionError( "External binary file input not being preseved for MFList" ) + + +@requires_exe("mf6") +def test_empty_packages(function_tmpdir): + new_sim_path = function_tmpdir / "split_model" + + sim = flopy.mf6.MFSimulation(sim_ws=new_sim_path) + ims = flopy.mf6.ModflowIms(sim, print_option="all", complexity="simple") + tdis = flopy.mf6.ModflowTdis(sim) + + nrow, ncol = 1, 14 + base_name = "sfr01gwfgwf" + gwf = flopy.mf6.ModflowGwf(sim, modelname=base_name, save_flows=True) + dis = flopy.mf6.ModflowGwfdis(gwf, nrow=1, ncol=14, top=0., botm=-1.) + npf = flopy.mf6.ModflowGwfnpf( + gwf, + save_flows=True, + save_specific_discharge=True, + icelltype=0, + k=20., + k33=20. + ) + ic = flopy.mf6.ModflowGwfic(gwf, strt=0.) 
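The "do not write package in this case" behavior described in the commit message above can be sketched in isolation. A minimal sketch, assuming plain dictionaries in place of the splitter's internal `mapped_data` structures (all names below are illustrative, not FloPy API):

```python
# Stand-in for per-model package data after a split: model 1 received
# none of the boundary cells, so no package should be written for it.
mapped_data = {
    0: {"stress_period_data": {0: [((0, 0, 0), 1.0)]}},
    1: {"stress_period_data": {}},
}

for mdl, data in mapped_data.items():
    spd = data.get("stress_period_data")
    if spd is not None and not spd:
        # no boundary condition instances, so skip package creation
        continue
    print(f"model {mdl}: build package with {spd}")
```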
+ chd = flopy.mf6.ModflowGwfchd( + gwf, + stress_period_data={0: [((0, 0, 13), 0.0),]} + ) + wel = flopy.mf6.ModflowGwfwel( + gwf, + stress_period_data={0: [((0, 0, 0), 1.),]} + ) + + # Build SFR records + packagedata = [ + (0, (0, 0, 0), 1., 1., 1., 0., 0.1, 0., 1., 1, 1., 0), + (1, (0, 0, 1), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (2, (0, 0, 2), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (3, (0, 0, 3), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (4, (0, 0, 4), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (5, (0, 0, 5), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (6, (0, 0, 6), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (7, (0, 0, 7), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (8, (0, 0, 8), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (9, (0, 0, 9), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (10, (0, 0, 10), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (11, (0, 0, 11), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (12, (0, 0, 12), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), + (13, (0, 0, 13), 1., 1., 1., 0., 0.1, 0., 1., 1, 1., 0), + ] + + connectiondata = [ + (0, -1), + (1, 0, -2), + (2, 1, -3), + (3, 2, -4), + (4, 3, -5), + (5, 4, -6), + (6, 5, -7), + (7, 6, -8), + (8, 7, -9), + (9, 8, -10), + (10, 9, -11), + (11, 10, -12), + (12, 11, -13), + (13, 12) + ] + + sfr = flopy.mf6.ModflowGwfsfr( + gwf, + print_input=True, + print_stage=True, + print_flows=True, + save_flows=True, + stage_filerecord=f"{base_name}.sfr.stg", + budget_filerecord=f"{base_name}.sfr.bud", + nreaches=14, + packagedata=packagedata, + connectiondata=connectiondata, + perioddata={0: [(0, "INFLOW", 1.),]} + ) + + array = np.zeros((nrow, ncol), dtype=int) + array[0, 7:] = 1 + mfsplit = Mf6Splitter(sim) + new_sim = mfsplit.split_model(array) + + m0 = new_sim.get_model(f"{base_name}_0") + m1 = new_sim.get_model(f"{base_name}_1") + + if "chd_0" in m0.package_dict: + raise AssertionError( + f"Empty CHD file written to {base_name}_0 model" + ) + + if "wel_0" in m1.package_dict: + raise AssertionError( + f"Empty WEL file written to {base_name}_1 model" + ) + + mvr_status0 = m0.sfr.mover.array + mvr_status1 = m0.sfr.mover.array + + if not mvr_status0 or not mvr_status1: + raise AssertionError( + "Mover status being overwritten in options splitting" + ) diff --git a/flopy/mf6/utils/model_splitter.py b/flopy/mf6/utils/model_splitter.py index 46ecc4e768..2cdd455d23 100644 --- a/flopy/mf6/utils/model_splitter.py +++ b/flopy/mf6/utils/model_splitter.py @@ -1823,9 +1823,11 @@ def _remap_sfr(self, package, mapped_data): # connect model network through movers between models mvr_recs = [] + mvr_mdl_set = set() for rec in sfr_mvr_conn: m0, n0 = sfr_remaps[rec[0]] m1, n1 = sfr_remaps[rec[1]] + mvr_mdl_set = mvr_mdl_set | {m0, m1} mvr_recs.append( ( self._model_dict[m0].name, @@ -1842,6 +1844,7 @@ def _remap_sfr(self, package, mapped_data): for idv, rec in div_mvr_conn.items(): m0, n0 = sfr_remaps[rec[0]] m1, n1 = sfr_remaps[rec[1]] + mvr_mdl_set = mvr_mdl_set | {m0, m1} mvr_recs.append( ( self._model_dict[m0].name, @@ -1856,7 +1859,7 @@ def _remap_sfr(self, package, mapped_data): ) if mvr_recs: - for mkey in self._model_dict.keys(): + for mkey in mvr_mdl_set: if not mapped_data[mkey]: continue mapped_data[mkey]["mover"] = True @@ -2954,7 +2957,9 @@ def _remap_package(self, package, ismvr=False): for mkey in mapped_data.keys(): if mapped_data[mkey]: - if isinstance(value, mfdatascalar.MFScalar): + if item in mapped_data[mkey]: + continue + elif isinstance(value, mfdatascalar.MFScalar): mapped_data[mkey][item] = value.data elif isinstance(value, 
mfdatalist.MFList): mapped_data[mkey][item] = value.array @@ -2964,7 +2969,11 @@ def _remap_package(self, package, ismvr=False): ) paks = {} for mdl, data in mapped_data.items(): + _ = mapped_data.pop("maxbound", None) if mapped_data[mdl]: + if "stress_period_data" in mapped_data[mdl]: + if not mapped_data[mdl]["stress_period_data"]: + continue paks[mdl] = pak_cls( self._model_dict[mdl], pname=package.name[0], **data ) From 696a209416f91bc4a9e61b4392f4218e9bd83408 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 3 Nov 2023 16:30:37 -0400 Subject: [PATCH 082/101] refactor(modflow): remove deprecated features (#1893) * nwt_11_fmt from mfuzf1.py, 3.3.1 (June 2020) * raise ValueError if ml.version is mfusg in mf.py, 3.3.4 (August 2021) --- flopy/modflow/mf.py | 10 +++---- flopy/modflow/mfuzf1.py | 60 +++++++---------------------------------- 2 files changed, 13 insertions(+), 57 deletions(-) diff --git a/flopy/modflow/mf.py b/flopy/modflow/mf.py index 5aacf266d1..c02ecf8398 100644 --- a/flopy/modflow/mf.py +++ b/flopy/modflow/mf.py @@ -756,14 +756,10 @@ def load( # update the modflow version ml.set_version(version) - # Impending deprecation warning to switch to using - # flopy.mfusg.MfUsg() instead of flopy.modflow.Modflow() + # DEPRECATED since version 3.3.4 if ml.version == "mfusg": - warnings.warn( - "flopy.modflow.Modflow() for mfusg models has been deprecated, " - " and will be removed in the next release. Please switch to using" - " flopy.mfusg.MfUsg() instead.", - DeprecationWarning, + raise ValueError( + "flopy.modflow.Modflow no longer supports mfusg; use flopy.mfusg.MfUsg() instead" ) # reset unit number for glo file diff --git a/flopy/modflow/mfuzf1.py b/flopy/modflow/mfuzf1.py index 82548d2725..c680876407 100644 --- a/flopy/modflow/mfuzf1.py +++ b/flopy/modflow/mfuzf1.py @@ -139,15 +139,6 @@ class ModflowUzf1(Package): followed by a series of depths and water contents in the unsaturated zone. - nwt_11_fmt : boolean - flag indicating whether or not to utilize a newer (MODFLOW-NWT - version 1.1 or later) format style, i.e., uzf1 optional variables - appear line-by-line rather than in a specific order on a single - line. True means that optional variables (e.g., SPECIFYTHTR, - SPECIFYTHTI, NOSURFLEAK) appear on new lines. True also supports - a number of newer optional variables (e.g., SPECIFYSURFK, - REJECTSURFK, SEEPSURFK). False means that optional variables - appear on one line. (default is False) specifythtr : boolean key word for specifying optional input variable THTR (default is 0) specifythti : boolean @@ -382,7 +373,6 @@ def __init__( air_entry=0.0, hroot=0.0, rootact=0.0, - nwt_11_fmt=False, specifysurfk=False, rejectsurfk=False, seepsurfk=False, @@ -481,15 +471,6 @@ def __init__( self.url = "uzf_-_unsaturated_zone_flow_pa_3.html" # Data Set 1a - if nwt_11_fmt: - warnings.warn( - "nwt_11_fmt has been deprecated," - " and will be removed in the next release" - " please provide a flopy.utils.OptionBlock object" - " to the options argument", - DeprecationWarning, - ) - self.nwt_11_fmt = nwt_11_fmt self.specifythtr = bool(specifythtr) self.specifythti = bool(specifythti) self.nosurfleak = bool(nosurfleak) @@ -695,37 +676,16 @@ def _ncells(self): return nrow * ncol def _write_1a(self, f_uzf): - # the nwt_11_fmt code is slated for removal (deprecated!) 
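The `_write_1a` replacement added later in this hunk writes UZF Data Set 1a as a single line of option keywords, and only when at least one flag is set. A self-contained sketch of that behavior, written as a simplified free function rather than the `ModflowUzf1` method itself:

```python
import io

def write_1a(f, specifythtr=False, specifythti=False, nosurfleak=False):
    # concatenate the active option keywords onto one line
    specify = ""
    if specifythtr:
        specify += "SPECIFYTHTR "
    if specifythti:
        specify += "SPECIFYTHTI "
    if nosurfleak:
        specify += "NOSURFLEAK"
    # write the line only when at least one option is active
    if specifythtr or specifythti or nosurfleak:
        f.write(f"{specify}\n")

buf = io.StringIO()
write_1a(buf, specifythtr=True, nosurfleak=True)
print(repr(buf.getvalue()))  # 'SPECIFYTHTR NOSURFLEAK\n'
```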
- if not self.nwt_11_fmt: - specify_temp = "" - if self.specifythtr > 0: - specify_temp += "SPECIFYTHTR " - if self.specifythti > 0: - specify_temp += "SPECIFYTHTI " - if self.nosurfleak > 0: - specify_temp += "NOSURFLEAK" - if (self.specifythtr + self.specifythti + self.nosurfleak) > 0: - f_uzf.write(f"{specify_temp}\n") - del specify_temp - else: - txt = "options\n" - for var in [ - "specifythtr", - "specifythti", - "nosurfleak", - "specifysurfk", - "rejectsurfk", - "seepsurfk", - ]: - value = self.__dict__[var] - if int(value) > 0: - txt += f"{var}\n" - if self.etsquare: - txt += f"etsquare {self.smoothfact}\n" - if self.netflux: - txt += f"netflux {self.unitrech} {self.unitdis}\n" - txt += "end\n" - f_uzf.write(txt) + specify_temp = "" + if self.specifythtr > 0: + specify_temp += "SPECIFYTHTR " + if self.specifythti > 0: + specify_temp += "SPECIFYTHTI " + if self.nosurfleak > 0: + specify_temp += "NOSURFLEAK" + if (self.specifythtr + self.specifythti + self.nosurfleak) > 0: + f_uzf.write(f"{specify_temp}\n") + del specify_temp def write_file(self, f=None): """ From 5ebc216822a86f32c7a2ab9a3735f021475dc0f4 Mon Sep 17 00:00:00 2001 From: jdhughes-usgs Date: Mon, 13 Nov 2023 16:17:33 -0600 Subject: [PATCH 083/101] fix(model_splitter): check keys in mftransient array (#1998) * fix issue with MFScalarTransient data in splitter (storage package period data) * add test for MFScalarTransient issue --- autotest/test_model_splitter.py | 168 ++++++++++++++++++++++++------ flopy/mf6/utils/model_splitter.py | 37 ++++--- 2 files changed, 164 insertions(+), 41 deletions(-) diff --git a/autotest/test_model_splitter.py b/autotest/test_model_splitter.py index 467b53e590..f7dca705e5 100644 --- a/autotest/test_model_splitter.py +++ b/autotest/test_model_splitter.py @@ -356,7 +356,7 @@ def test_control_records(function_tmpdir): spd_ls1 = ml1.wel.stress_period_data.get_record(1) spd_ls2 = ml1.wel.stress_period_data.get_record(2) - + if spd_ls1["filename"] is None or spd_ls1["binary"]: raise AssertionError( "External ascii files not being preserved for MFList" @@ -379,41 +379,49 @@ def test_empty_packages(function_tmpdir): nrow, ncol = 1, 14 base_name = "sfr01gwfgwf" gwf = flopy.mf6.ModflowGwf(sim, modelname=base_name, save_flows=True) - dis = flopy.mf6.ModflowGwfdis(gwf, nrow=1, ncol=14, top=0., botm=-1.) + dis = flopy.mf6.ModflowGwfdis(gwf, nrow=1, ncol=14, top=0.0, botm=-1.0) npf = flopy.mf6.ModflowGwfnpf( gwf, save_flows=True, save_specific_discharge=True, icelltype=0, - k=20., - k33=20. + k=20.0, + k33=20.0, ) - ic = flopy.mf6.ModflowGwfic(gwf, strt=0.) 
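The splitter fix in this patch ("check keys in mftransient array") guards against stress periods that have no entry in a transient dataset's sparse storage. A minimal sketch of why the guard matters, using a plain dict in place of the internal `_data_storage` attribute (illustrative only):

```python
# Transient MF6 data is stored sparsely, keyed by stress period; periods
# without an explicit entry must be skipped when remapping, not indexed.
data_storage = {0: "steady-state flags", 2: "steady-state flags"}
nper = 3

for per in range(nper):
    if per in data_storage:  # the guard added by this fix
        print(f"period {per}: remap {data_storage[per]}")
    else:
        print(f"period {per}: nothing stored, nothing to remap")
```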
+ ic = flopy.mf6.ModflowGwfic(gwf, strt=0.0) chd = flopy.mf6.ModflowGwfchd( gwf, - stress_period_data={0: [((0, 0, 13), 0.0),]} + stress_period_data={ + 0: [ + ((0, 0, 13), 0.0), + ] + }, ) wel = flopy.mf6.ModflowGwfwel( gwf, - stress_period_data={0: [((0, 0, 0), 1.),]} + stress_period_data={ + 0: [ + ((0, 0, 0), 1.0), + ] + }, ) # Build SFR records packagedata = [ - (0, (0, 0, 0), 1., 1., 1., 0., 0.1, 0., 1., 1, 1., 0), - (1, (0, 0, 1), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (2, (0, 0, 2), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (3, (0, 0, 3), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (4, (0, 0, 4), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (5, (0, 0, 5), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (6, (0, 0, 6), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (7, (0, 0, 7), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (8, (0, 0, 8), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (9, (0, 0, 9), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (10, (0, 0, 10), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (11, (0, 0, 11), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (12, (0, 0, 12), 1., 1., 1., 0., 0.1, 0., 1., 2, 1., 0), - (13, (0, 0, 13), 1., 1., 1., 0., 0.1, 0., 1., 1, 1., 0), + (0, (0, 0, 0), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 1, 1.0, 0), + (1, (0, 0, 1), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (2, (0, 0, 2), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (3, (0, 0, 3), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (4, (0, 0, 4), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (5, (0, 0, 5), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (6, (0, 0, 6), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (7, (0, 0, 7), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (8, (0, 0, 8), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (9, (0, 0, 9), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (10, (0, 0, 10), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (11, (0, 0, 11), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (12, (0, 0, 12), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 2, 1.0, 0), + (13, (0, 0, 13), 1.0, 1.0, 1.0, 0.0, 0.1, 0.0, 1.0, 1, 1.0, 0), ] connectiondata = [ @@ -430,7 +438,7 @@ def test_empty_packages(function_tmpdir): (10, 9, -11), (11, 10, -12), (12, 11, -13), - (13, 12) + (13, 12), ] sfr = flopy.mf6.ModflowGwfsfr( @@ -444,7 +452,11 @@ def test_empty_packages(function_tmpdir): nreaches=14, packagedata=packagedata, connectiondata=connectiondata, - perioddata={0: [(0, "INFLOW", 1.),]} + perioddata={ + 0: [ + (0, "INFLOW", 1.0), + ] + }, ) array = np.zeros((nrow, ncol), dtype=int) @@ -456,14 +468,10 @@ def test_empty_packages(function_tmpdir): m1 = new_sim.get_model(f"{base_name}_1") if "chd_0" in m0.package_dict: - raise AssertionError( - f"Empty CHD file written to {base_name}_0 model" - ) + raise AssertionError(f"Empty CHD file written to {base_name}_0 model") if "wel_0" in m1.package_dict: - raise AssertionError( - f"Empty WEL file written to {base_name}_1 model" - ) + raise AssertionError(f"Empty WEL file written to {base_name}_1 model") mvr_status0 = m0.sfr.mover.array mvr_status1 = m0.sfr.mover.array @@ -472,3 +480,105 @@ def test_empty_packages(function_tmpdir): raise AssertionError( "Mover status being overwritten in options splitting" ) + + +@requires_exe("mf6") +def test_transient_array(function_tmpdir): + name = "tarr" + new_sim_path = function_tmpdir / f"{name}_split_model" + nper = 3 + tdis_data = [ + (300000.0, 1, 1.0), + (36500.0, 10, 1.5), + (300000, 1, 1.0), + ] + nlay, nrow, ncol = 3, 21, 20 + xlen, ylen = 10000.0, 10500.0 + delc = ylen / nrow + delr = xlen / ncol + 
steady = {0: True, 2: True} + transient = {1: True} + + sim = flopy.mf6.MFSimulation(sim_name=name, sim_ws=new_sim_path) + ims = flopy.mf6.ModflowIms(sim, complexity="simple") + tdis = flopy.mf6.ModflowTdis(sim, nper=nper, perioddata=tdis_data) + + gwf = flopy.mf6.ModflowGwf( + sim, + modelname=name, + save_flows=True, + ) + dis = flopy.mf6.ModflowGwfdis( + gwf, + nlay=nlay, + nrow=nrow, + ncol=ncol, + delr=delr, + delc=delc, + top=400, + botm=[220.0, 200.0, 0], + length_units="meters", + ) + npf = flopy.mf6.ModflowGwfnpf( + gwf, + icelltype=1, + k=[50.0, 0.01, 200.0], + k33=[10.0, 0.01, 20.0], + ) + sto = flopy.mf6.ModflowGwfsto( + gwf, + iconvert=1, + ss=0.0001, + sy=0.1, + steady_state=steady, + transient=transient, + ) + ic = flopy.mf6.ModflowGwfic(gwf, strt=400.0) + oc = flopy.mf6.ModflowGwfoc( + gwf, + head_filerecord=f"{name}.hds", + budget_filerecord=f"{name}.cbc", + saverecord={0: [("head", "all"), ("budget", "all")]}, + ) + + rch = flopy.mf6.ModflowGwfrcha(gwf, recharge=0.005) + + chd_spd = [(0, i, ncol - 1, 320) for i in range(nrow)] + chd = flopy.mf6.ModflowGwfchd( + gwf, + stress_period_data=chd_spd, + ) + + well_spd = { + 0: [(0, 10, 9, -75000.0)], + 2: [(0, 10, 9, -75000.0), (2, 12, 4, -100000.0)], + } + wel = flopy.mf6.ModflowGwfwel( + gwf, + stress_period_data=well_spd, + ) + + sarr = np.ones((nrow, ncol), dtype=int) + sarr[:, int(ncol / 2) :] = 2 + mfsplit = Mf6Splitter(sim) + new_sim = mfsplit.split_model(sarr) + + for name in new_sim.model_names: + g = new_sim.get_model(name) + d = {} + for key in ( + 0, + 2, + ): + d[key] = g.sto.steady_state.get_data(key).get_data() + assert d == steady, ( + "storage steady_state dictionary " + + f"does not match for model '{name}'" + ) + d = {} + for key in (1,): + d[key] = g.sto.transient.get_data(key).get_data() + assert d == transient, ( + "storage package transient dictionary " + + f"does not match for model '{name}'" + ) diff --git a/flopy/mf6/utils/model_splitter.py b/flopy/mf6/utils/model_splitter.py index 2cdd455d23..c5eafd46ae 100644 --- a/flopy/mf6/utils/model_splitter.py +++ b/flopy/mf6/utils/model_splitter.py @@ -1076,20 +1076,30 @@ def _remap_transient_array(self, item, mftransient, mapped_data): d0 = {mkey: {} for mkey in self._model_dict.keys()} for per, array in enumerate(mftransient.array): - storage = mftransient._data_storage[per] - how = [ - i.data_storage_type.value - for i in storage.layer_storage.multi_dim_list - ] - binary = [i.binary for i in storage.layer_storage.multi_dim_list] - fnames = [i.fname for i in storage.layer_storage.multi_dim_list] + if per in mftransient._data_storage.keys(): + storage = mftransient._data_storage[per] + how = [ + i.data_storage_type.value + for i in storage.layer_storage.multi_dim_list + ] + binary = [ + i.binary for i in storage.layer_storage.multi_dim_list + ] + fnames = [ + i.fname for i in storage.layer_storage.multi_dim_list + ] - d = self._remap_array( - item, array, mapped_data, how=how, binary=binary, fnames=fnames - ) + d = self._remap_array( + item, + array, + mapped_data, + how=how, + binary=binary, + fnames=fnames, + ) - for mkey in d.keys(): - d0[mkey][per] = d[mkey][item] + for mkey in d.keys(): + d0[mkey][per] = d[mkey][item] for mkey, values in d0.items(): mapped_data[mkey][item] = values @@ -2936,6 +2946,9 @@ def _remap_package(self, package, ismvr=False): value = list_data mapped_data = self._remap_mflist(item, value, mapped_data) + elif isinstance(value, mfdatascalar.MFScalarTransient): + for mkey in self._model_dict.keys(): + mapped_data[mkey][item] = 
value._data_storage elif isinstance(value, mfdatascalar.MFScalar): for mkey in self._model_dict.keys(): mapped_data[mkey][item] = value.data From af8954b45903680a2931573453228b03ea7beb6b Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Mon, 13 Nov 2023 17:24:46 -0500 Subject: [PATCH 084/101] refactor: support python3.12, simplify tests and dependencies (#1999) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * use µmamba in CI tests to avoid pyzmq issue * remove pytest-virtualenv as test dependency * rewrite generate_classes tests with virtualenv * minor simplifications to autotest/conftest.py * add python 3.12 classifier to pyproject.toml * add code generation test marker to pytest.ini * add IPRN value where missing in some example input files * add pip and replace bmipy with xmipy in etc/environment.yml * deduplicate in CI workflows (consolidate windows/non-windows) --- .docs/md/generate_classes.md | 2 +- .github/workflows/benchmark.yml | 44 ++---------- .github/workflows/commit.yml | 67 ++++-------------- .github/workflows/examples.yml | 54 +++------------ .github/workflows/regression.yml | 48 ++----------- CONTRIBUTING.md | 8 ++- DEVELOPER.md | 14 ++-- autotest/conftest.py | 20 ++---- autotest/pytest.ini | 11 +-- autotest/test_generate_classes.py | 68 ++++++++++++------- etc/environment.yml | 12 ++-- .../data/mf6/test006_2models_mvr/model1.dis | 2 +- .../data/mf6/test006_2models_mvr/model1.ic | 2 +- examples/data/mf6/test006_gwf3/flow.ic | 2 +- pyproject.toml | 4 +- 15 files changed, 111 insertions(+), 247 deletions(-) diff --git a/.docs/md/generate_classes.md b/.docs/md/generate_classes.md index 7056918174..05c2300d40 100644 --- a/.docs/md/generate_classes.md +++ b/.docs/md/generate_classes.md @@ -100,4 +100,4 @@ By default, a backup is made of FloPy's package classes before rewriting them. T ## Testing class generation -Tests for the `generate_classes()` utility are located in `test_generate_classes.py`. The tests depend on [`pytest-virtualenv`](https://pypi.org/project/pytest-virtualenv/) and will be skipped if run in parallel without the `--dist loadfile` option for `pytest-xdist`. +Tests for the `generate_classes()` utility are located in `test_generate_classes.py`. The tests depend on [`virtualenv`](https://pypi.org/project/virtualenv/) and will be skipped if run in parallel without the `--dist loadfile` option for `pytest-xdist`. \ No newline at end of file diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index e72415c424..645b1d930a 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -12,37 +12,21 @@ jobs: fail-fast: false matrix: os: [ ubuntu-latest, macos-latest, windows-latest ] - python-version: [ 3.8, 3.9, "3.10", "3.11" ] + python-version: [ 3.8, 3.9, "3.10", "3.11", "3.12" ] exclude: # avoid shutil.copytree infinite recursion bug # https://github.com/python/cpython/pull/17098 - python-version: '3.8.0' defaults: run: - shell: bash + shell: bash -l {0} timeout-minutes: 90 steps: - name: Checkout repo uses: actions/checkout@v4 - - name: Setup Python - if: runner.os != 'Windows' - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - cache: 'pip' - cache-dependency-path: pyproject.toml - - - name: Install Python dependencies - if: runner.os != 'Windows' - run: | - pip install --upgrade pip - pip install . 
- pip install ".[test, optional]" - - name: Setup Micromamba - if: runner.os == 'Windows' uses: mamba-org/setup-micromamba@v1 with: environment-file: etc/environment.yml @@ -54,29 +38,14 @@ jobs: bash powershell - - name: Install extra Python dependencies - if: runner.os == 'Windows' - shell: bash -l {0} - run: | - pip install xmipy - pip install . + - name: Install FloPy + run: pip install . - name: Install Modflow executables uses: modflowpy/install-modflow-action@v1 - name: Run benchmarks - if: runner.os != 'Windows' - working-directory: ./autotest - run: | - mkdir -p .benchmarks - pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - - - name: Run benchmarks - if: runner.os == 'Windows' - shell: bash -l {0} - working-directory: ./autotest + working-directory: autotest run: | mkdir -p .benchmarks pytest -v --durations=0 --benchmark-only --benchmark-json .benchmarks/${{ matrix.os }}_python${{ matrix.python-version }}.json --keep-failed=.failed @@ -115,7 +84,7 @@ jobs: - name: Setup Python uses: actions/setup-python@v4 with: - python-version: 3.8 + python-version: 3.12 cache: 'pip' cache-dependency-path: pyproject.toml @@ -151,7 +120,6 @@ jobs: fi python ./scripts/process_benchmarks.py ./autotest/.benchmarks ./autotest/.benchmarks env: - ARTIFACTS: ${{steps.run_tests.outputs.artifact_ids}} GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} - name: Upload benchmark results diff --git a/.github/workflows/commit.yml b/.github/workflows/commit.yml index 7765c4fd6c..d151a4319d 100644 --- a/.github/workflows/commit.yml +++ b/.github/workflows/commit.yml @@ -32,11 +32,10 @@ jobs: python -c "import flopy; print(f'{flopy.__version__}')" - name: Build package - run: | - python -m build + run: python -m build + - name: Check package - run: | - twine check --strict dist/* + run: twine check --strict dist/* lint: name: Lint @@ -119,7 +118,7 @@ jobs: run: | pip install --upgrade pip pip install . - pip install ".[test,optional]" + pip install ".[test, optional]" - name: Install Modflow executables uses: modflowpy/install-modflow-action@v1 @@ -135,13 +134,13 @@ jobs: if: failure() with: name: failed-smoke-${{ runner.os }}-${{ env.PYTHON_VERSION }} - path: ./autotest/.failed/** + path: autotest/.failed/** - name: Upload coverage if: github.repository_owner == 'modflowpy' && (github.event_name == 'push' || github.event_name == 'pull_request') uses: codecov/codecov-action@v3 with: - files: ./autotest/coverage.xml + files: autotest/coverage.xml test: name: Test @@ -151,37 +150,21 @@ jobs: fail-fast: false matrix: os: [ ubuntu-latest, macos-latest, windows-latest ] - python-version: [ 3.8, 3.9, "3.10", "3.11" ] + python-version: [ 3.8, 3.9, "3.10", "3.11", "3.12" ] exclude: # avoid shutil.copytree infinite recursion bug # https://github.com/python/cpython/pull/17098 - python-version: '3.8.0' defaults: run: - shell: bash + shell: bash -l {0} timeout-minutes: 45 steps: - name: Checkout repo uses: actions/checkout@v4 - - name: Setup Python - if: runner.os != 'Windows' - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - cache: 'pip' - cache-dependency-path: pyproject.toml - - - name: Install Python dependencies - if: runner.os != 'Windows' - run: | - pip install --upgrade pip - pip install . 
- pip install ".[test, optional]" - - name: Setup Micromamba - if: runner.os == 'Windows' uses: mamba-org/setup-micromamba@v1 with: environment-file: etc/environment.yml @@ -193,12 +176,8 @@ jobs: bash powershell - - name: Install Python dependencies - shell: bash -l {0} - if: runner.os == 'Windows' - run: | - pip install xmipy - pip install . + - name: Install FloPy + run: pip install . - name: Install Modflow-related executables uses: modflowpy/install-modflow-action@v1 @@ -208,28 +187,11 @@ jobs: with: repo: modflow6-nightly-build - - name: Update FloPy packages - if: runner.os != 'Windows' - run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - - - name: Update FloPy packages - if: runner.os == 'Windows' - shell: bash -l {0} + - name: Update package classes run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - name: Run tests - if: runner.os != 'Windows' - working-directory: ./autotest - run: | - pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile - coverage report - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - - - name: Run tests - if: runner.os == 'Windows' - shell: bash -l {0} - working-directory: ./autotest + working-directory: autotest run: | pytest -v -m="not example and not regression" -n=auto --cov=flopy --cov-append --cov-report=xml --durations=0 --keep-failed=.failed --dist loadfile coverage report @@ -241,11 +203,10 @@ jobs: if: failure() with: name: failed-${{ matrix.os }}-${{ matrix.python-version }} - path: | - ./autotest/.failed/** + path: autotest/.failed/** - name: Upload coverage if: github.repository_owner == 'modflowpy' && (github.event_name == 'push' || github.event_name == 'pull_request') uses: codecov/codecov-action@v3 with: - files: ./autotest/coverage.xml + files: autotest/coverage.xml diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index fe9b014b3b..5540c9f0cc 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -12,36 +12,20 @@ jobs: fail-fast: false matrix: os: [ ubuntu-latest, macos-latest, windows-latest ] - python-version: [ 3.8, 3.9, "3.10", "3.11" ] + python-version: [ 3.8, 3.9, "3.10", "3.11", "3.12" ] exclude: # avoid shutil.copytree infinite recursion bug # https://github.com/python/cpython/pull/17098 - python-version: '3.8.0' defaults: run: - shell: bash + shell: bash -l {0} timeout-minutes: 90 steps: - name: Checkout repo uses: actions/checkout@v4 - - name: Setup Python - if: runner.os != 'Windows' - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - cache: 'pip' - cache-dependency-path: pyproject.toml - - - name: Install Python dependencies - if: runner.os != 'Windows' - run: | - pip install --upgrade pip - pip install . - pip install ".[test, optional]" - - name: Setup Micromamba - if: runner.os == 'Windows' uses: mamba-org/setup-micromamba@v1 with: environment-file: etc/environment.yml @@ -53,14 +37,10 @@ jobs: bash powershell - - name: Install extra Python dependencies - if: runner.os == 'Windows' - shell: bash -l {0} - run: | - pip install xmipy - pip install . + - name: Install FloPy + run: pip install . 
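These workflows regenerate FloPy's MF6 package classes from the MODFLOW 6 develop branch before running tests. The same step can be run from Python instead of the CLI; a hedged sketch, assuming the `ref` and `backup` keyword arguments mirror the `--ref` and `--no-backup` flags used in the steps above (network access required to fetch the MODFLOW 6 definition files):

```python
# Equivalent of:
#   python -m flopy.mf6.utils.generate_classes --ref develop --no-backup
from flopy.mf6.utils import generate_classes

generate_classes(ref="develop", backup=False)
```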
- - name: Workaround OpenGL issue on Linux + - name: OpenGL workaround on Linux if: runner.os == 'Linux' run: | # referenced from https://github.com/pyvista/pyvista/blob/main/.github/workflows/vtk-pre-test.yml#L53 @@ -85,28 +65,11 @@ jobs: repo: modflow6-nightly-build - name: Update FloPy packages - if: runner.os != 'Windows' - run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - - - name: Update FloPy packages - if: runner.os == 'Windows' - shell: bash -l {0} run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - name: Run example tests - if: runner.os != 'Windows' - working-directory: ./autotest - run: | - pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - - - name: Run example tests - if: runner.os == 'Windows' - shell: bash -l {0} - working-directory: ./autotest - run: | - pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed + working-directory: autotest + run: pytest -v -m="example" -n=auto -s --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -115,5 +78,4 @@ jobs: if: failure() with: name: failed-example-${{ matrix.os }}-${{ matrix.python-version }} - path: | - ./autotest/.failed/** + path: autotest/.failed/** diff --git a/.github/workflows/regression.yml b/.github/workflows/regression.yml index 972f1e1650..b11c9d9d5c 100644 --- a/.github/workflows/regression.yml +++ b/.github/workflows/regression.yml @@ -12,36 +12,20 @@ jobs: fail-fast: false matrix: os: [ ubuntu-latest, macos-latest, windows-latest ] - python-version: [ 3.8, 3.9, "3.10", "3.11" ] + python-version: [ 3.8, 3.9, "3.10", "3.11", "3.12" ] exclude: # avoid shutil.copytree infinite recursion bug # https://github.com/python/cpython/pull/17098 - python-version: '3.8.0' defaults: run: - shell: bash + shell: bash -l {0} timeout-minutes: 90 steps: - name: Checkout repo uses: actions/checkout@v4 - - name: Setup Python - if: runner.os != 'Windows' - uses: actions/setup-python@v4 - with: - python-version: ${{ matrix.python-version }} - cache: 'pip' - cache-dependency-path: pyproject.toml - - - name: Install Python dependencies - if: runner.os != 'Windows' - run: | - pip install --upgrade pip - pip install . - pip install ".[test, optional]" - - name: Setup Micromamba - if: runner.os == 'Windows' uses: mamba-org/setup-micromamba@v1 with: environment-file: etc/environment.yml @@ -53,12 +37,8 @@ jobs: bash powershell - - name: Install extra Python dependencies - if: runner.os == 'Windows' - shell: bash -l {0} - run: | - pip install xmipy - pip install . + - name: Install FloPy + run: pip install . 
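The `modflowpy/install-modflow-action` steps download MODFLOW-related executables for CI. Outside CI, FloPy's `get-modflow` console script performs the same installation; the `:flopy` target below is the documented shorthand for installing into FloPy's own bin directory:

```python
# Fetch MODFLOW executables into flopy's bundled bin directory
# (requires network access; get-modflow ships with flopy).
import subprocess

subprocess.run(["get-modflow", ":flopy"], check=True)
```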
- name: Install Modflow-related executables uses: modflowpy/install-modflow-action@v1 @@ -69,25 +49,10 @@ repo: modflow6-nightly-build - name: Update FloPy packages - if: runner.os != 'Windows' - run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - - - name: Update FloPy packages - if: runner.os == 'Windows' - shell: bash -l {0} run: python -m flopy.mf6.utils.generate_classes --ref develop --no-backup - name: Run regression tests - if: runner.os != 'Windows' - working-directory: ./autotest - run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - - - name: Run regression tests - if: runner.os == 'Windows' - shell: bash -l {0} - working-directory: ./autotest + working-directory: autotest run: pytest -v -m="regression" -n=auto --durations=0 --keep-failed=.failed env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -97,5 +62,4 @@ if: failure() with: name: failed-regression-${{ matrix.os }}-${{ matrix.python-version }} - path: | - ./autotest/.failed/** + path: autotest/.failed/** diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index a604641b65..1abb137f0b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -55,12 +55,14 @@ Before you submit your Pull Request (PR) consider the following guidelines: ``` 4. Create your patch, **including appropriate test cases**. See [DEVELOPER.md](DEVELOPER.md#running-tests) for guidelines for constructing autotests. -5. Run the [isort import sorter](https://github.com/PyCQA/isort) and [black formatter](https://github.com/psf/black). There is a utility script to do this in the `scripts` directory: +5. Run the formatting tools from the project root: ```shell - python ./scripts/pull_request_prepare.py + black -v flopy + isort -v flopy ``` - Note: Pull Requests must pass isort import and black format checks run on the [GitHub actions](https://github.com/modflowpy/flopy/actions) (*linting*) before they will be accepted. isort can be installed using [`pip`](https://pypi.org/project/isort/) and [`conda`](https://anaconda.org/conda-forge/isort). The black formatter can also be installed using [`pip`](https://pypi.org/project/black/) and [`conda`](https://anaconda.org/conda-forge/black). If the Pull Request fails the *linting* job in the [flopy continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) workflow, make sure the latest versions of isort and black are installed. + + Note: Pull Requests must pass format checks run on the [GitHub actions](https://github.com/modflowpy/flopy/actions) before they will be accepted. If the Pull Request fails the `lint` job in the [continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) workflow, make sure the latest versions of `black` and `isort` are installed (this may require clearing CI caches). 6. Run the full FloPy test suite and ensure that all tests pass: diff --git a/DEVELOPER.md b/DEVELOPER.md index cda40aa717..195f4a9d30 100644 --- a/DEVELOPER.md +++ b/DEVELOPER.md @@ -63,15 +63,15 @@ Then install `flopy` and core dependencies from the project root: pip install . +The `flopy` package has a number of [optional dependencies](.docs/optional_dependencies.md), as well as extra dependencies required for linting, testing, and building documentation. Extra dependencies are listed in the `test`, `lint`, `optional`, and `doc` groups under the `[project.optional-dependencies]` section in `pyproject.toml`.
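These groups can also be inspected programmatically; a minimal sketch, assuming Python 3.11+ for the standard-library `tomllib` (the third-party `tomli` package provides the same API on older versions):

```python
import tomllib  # standard library on Python 3.11+

# list the extra dependency groups declared in pyproject.toml
with open("pyproject.toml", "rb") as f:
    meta = tomllib.load(f)

for group, deps in meta["project"]["optional-dependencies"].items():
    print(f"{group}: {len(deps)} dependencies")
```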
Core, linting, testing and optional dependencies are included in the Conda environment in `etc/environment.yml`. Only core dependencies are included in the PyPI package — to install extra dependency groups with pip, use `pip install ".[<group>]"`. For instance, to install all extra dependency groups: + + pip install ".[test, lint, optional, doc]" + Alternatively, with Anaconda or Miniconda: conda env create -f etc/environment.yml conda activate flopy -The `flopy` package has a number of [optional dependencies](.docs/optional_dependencies.md), as well as extra dependencies required for linting, testing, and building documentation. Extra dependencies are listed in the `test`, `lint`, `optional`, and `doc` groups under the `[project.optional-dependencies]` section in `pyproject.toml`. Core, linting, testing and optional dependencies are included in the Conda environment in `etc/environment.yml`. Only core dependencies are included in the PyPI package — to install extra dependency groups with pip, use `pip install ".[<group>]"`. For instance, to install all extra dependency groups: - - pip install ".[test, lint, optional, doc]" - #### Python IDEs ##### Visual Studio Code @@ -80,7 +80,7 @@ VSCode users on Windows may need to run `conda init`, then open a fresh terminal ```json { - "python.defaultInterpreterPath": "/path/to/your/virtual/environment", + "python.defaultInterpreterPath": "/path/to/environment", "python.terminal.activateEnvironment": true } ``` @@ -110,9 +110,9 @@ wget https://github.com/MODFLOW-USGS/executables/releases/download/8.0/linux.zip unzip linux.zip -d /path/to/your/install/location ``` -Then add the install location to your `PATH` +Then add the install location to the `PATH` - export PATH="/path/to/your/install/location:$PATH" + export PATH="/path/to/install/location:$PATH" ##### Mac diff --git a/autotest/conftest.py b/autotest/conftest.py index b85458c044..0db0d3ec08 100644 --- a/autotest/conftest.py +++ b/autotest/conftest.py @@ -1,18 +1,8 @@ -import importlib -import os import re -import socket -import sys from importlib import metadata -from os import environ -from os.path import basename, normpath from pathlib import Path from platform import system -from shutil import copytree, which -from subprocess import PIPE, Popen -from typing import List, Optional -from urllib import request -from warnings import warn +from typing import List import matplotlib.pyplot as plt import pytest @@ -28,7 +18,7 @@ SHAPEFILE_EXTENSIONS = ["prj", "shx", "dbf"] -# misc utilities +# path utilities def get_project_root_path() -> Path: @@ -43,7 +33,7 @@ def get_flopy_data_path() -> Path: return get_project_root_path() / "flopy" / "data" -# path fixtures +# fixtures @pytest.fixture(scope="session") @@ -66,11 +56,9 @@ def example_shapefiles(example_data_path) -> List[Path]: return [f.resolve() for f in (example_data_path / "prj_test").glob("*")] -# fixture to automatically close any plots (or optionally show them) - - @pytest.fixture(autouse=True) def close_plot(request): + # fixture to automatically close any plots (or optionally show them) yield # plots only shown if requested via CLI flag, diff --git a/autotest/pytest.ini b/autotest/pytest.ini index 8f4616fd37..c94efd1f32 100644 --- a/autotest/pytest.ini +++ b/autotest/pytest.ini @@ -10,8 +10,9 @@ python_files = env_files = .env markers = - slow: tests that don't complete in a few seconds - example: exercise scripts, tutorials and notebooks - regression: tests that compare multiple results - meta: run by other tests (e.g.
testing fixtures) - mf6: tests for the mf6 module \ No newline at end of file + example: exercise scripts, tutorials, notebooks + generation: tests for code generation utilities + meta: tests run by other tests + mf6: tests for MODFLOW 6 support + regression: tests comparing multiple versions + slow: tests not completing in a few seconds \ No newline at end of file diff --git a/autotest/test_generate_classes.py b/autotest/test_generate_classes.py index cff9a37f31..db812aa40b 100644 --- a/autotest/test_generate_classes.py +++ b/autotest/test_generate_classes.py @@ -1,5 +1,4 @@ import sys -from datetime import datetime from os import environ from pathlib import Path from pprint import pprint @@ -7,7 +6,8 @@ from warnings import warn import pytest -from modflow_devtools.misc import get_current_branch +from modflow_devtools.misc import get_current_branch, run_cmd +from virtualenv import cli_run branch = get_current_branch() @@ -19,7 +19,20 @@ def nonempty(itr: Iterable): def pytest_generate_tests(metafunc): - # defaults + """ + Test mf6 module code generation on a small, hopefully + fairly representative set of MODFLOW 6 input & output + specification versions, including the develop branch, + the latest official release, and a few older releases + and commits. + + TODO: May make sense to run the full battery of tests + against all of the versions of mf6io flopy guarantees + support for, perhaps develop and the latest release? Though + some backwards compatibility seems ideal if possible. + This would need changes in GH Actions CI test matrix. + """ + owner = "MODFLOW-USGS" repo = "modflow6" ref = [ @@ -56,16 +69,13 @@ def pytest_generate_tests(metafunc): metafunc.parametrize(key, ref, scope="session") +@pytest.mark.generation @pytest.mark.mf6 @pytest.mark.slow -@pytest.mark.regression -@pytest.mark.skipif( - branch == "master" or branch.startswith("v"), - reason="skip on master and release branches", -) def test_generate_classes_from_github_refs( - request, virtualenv, project_root_path, ref, worker_id + request, project_root_path, ref, worker_id, function_tmpdir ): + # skip if run in parallel with pytest-xdist without --dist loadfile argv = ( request.config.workerinput["mainargv"] if hasattr(request.config, "workerinput") else [] ) if worker_id != "master" and "loadfile" not in argv: pytest.skip("can't run in parallel") - python = virtualenv.python - venv = Path(python).parent - print( - f"Using temp venv at {venv} with python {python} to test class generation from {ref}" - ) + # create virtual environment + venv = function_tmpdir / "venv" + python = venv / "bin" / "python" + pip = venv / "bin" / "pip" + cli_run([str(venv)]) + print(f"Using temp venv at {venv} to test class generation from {ref}") - # install flopy/dependencies - pprint(virtualenv.run(f"pip install {project_root_path}")) - for dependency in ["modflow-devtools"]: - pprint(virtualenv.run(f"pip install {dependency}")) + # install flopy and dependencies + deps = [str(project_root_path), "modflow-devtools"] + for dep in deps: + out, err, ret = run_cmd(str(pip), "install", dep, verbose=True) + assert not ret, out + err # get creation time of files flopy_path = ( - venv.parent + venv / "lib" / f"python{sys.version_info.major}.{sys.version_info.minor}" / "site-packages" @@ -107,12 +119,20 @@ def test_generate_classes_from_github_refs( ref = spl[2] # generate classes - pprint( - virtualenv.run( - "python -m flopy.mf6.utils.generate_classes " - f"--owner {owner} --repo {repo} --ref {ref}
--no-backup" - ) + out, err, ret = run_cmd( + str(python), + "-m", + "flopy.mf6.utils.generate_classes", + "--owner", + owner, + "--repo", + repo, + "--ref", + ref, + "--no-backup", + verbose=True, ) + assert not ret, out + err def get_mtime(f): try: diff --git a/etc/environment.yml b/etc/environment.yml index d4a6b62a55..a5a8b3cdc4 100644 --- a/etc/environment.yml +++ b/etc/environment.yml @@ -2,6 +2,8 @@ name: flopy channels: - conda-forge dependencies: + - pip + # required - python>=3.8 - numpy>=1.15.0 @@ -9,6 +11,7 @@ dependencies: # lint - black + - cffconvert - flake8 - isort - pylint @@ -19,15 +22,12 @@ dependencies: - filelock - jupyter - jupytext - - pip - - pip: - - modflow-devtools + - modflow-devtools - pytest - pytest-benchmark - pytest-cases - pytest-cov - pytest-dotenv - - pytest-virtualenv - pytest-xdist - virtualenv @@ -50,6 +50,4 @@ dependencies: - pyvista - imageio - pymetis - - # MODFLOW API dependencies - - bmipy + - xmipy diff --git a/examples/data/mf6/test006_2models_mvr/model1.dis b/examples/data/mf6/test006_2models_mvr/model1.dis index 3e5c3eaa5e..07637f87e5 100644 --- a/examples/data/mf6/test006_2models_mvr/model1.dis +++ b/examples/data/mf6/test006_2models_mvr/model1.dis @@ -11,7 +11,7 @@ end dimensions BEGIN GRIDDATA IDOMAIN - INTERNAL FACTOR 1 IPRN + INTERNAL FACTOR 1 IPRN 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1 diff --git a/examples/data/mf6/test006_2models_mvr/model1.ic b/examples/data/mf6/test006_2models_mvr/model1.ic index 841d89fc74..b1069687b6 100644 --- a/examples/data/mf6/test006_2models_mvr/model1.ic +++ b/examples/data/mf6/test006_2models_mvr/model1.ic @@ -4,7 +4,7 @@ end options BEGIN GRIDDATA strt -INTERNAL FACTOR 1.0 IPRN +INTERNAL FACTOR 1.0 IPRN 0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 diff --git a/examples/data/mf6/test006_gwf3/flow.ic b/examples/data/mf6/test006_gwf3/flow.ic index 3d1b93d931..ce4024e3fa 100644 --- a/examples/data/mf6/test006_gwf3/flow.ic +++ b/examples/data/mf6/test006_gwf3/flow.ic @@ -4,7 +4,7 @@ end options BEGIN GRIDDATA strt -INTERNAL FACTOR 1 IPRN +INTERNAL FACTOR 1 IPRN 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 diff --git a/pyproject.toml b/pyproject.toml index 95ecd5ac78..bb5ebb487b 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -24,6 +24,7 @@ classifiers = [ "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engineering :: Hydrology", ] requires-python = ">=3.8" @@ -55,7 +56,6 @@ test = [ "pytest-cases", "pytest-cov", "pytest-dotenv", - "pytest-virtualenv", "pytest-xdist", "virtualenv" ] @@ -121,4 +121,4 @@ target_version = ["py38"] [tool.isort] profile = "black" src_paths = ["flopy"] -line_length = 79 +line_length = 79 \ No newline at end of file From 4eea1870cb27aa2976cbf5366a6c2cb942a2f0c4 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Mon, 13 Nov 2023 21:11:34 -0500 Subject: [PATCH 085/101] refactor(msfsr2): write sfr_botm_conflicts.chk to model workspace (#2002) --- flopy/modflow/mfsfr2.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index c5035b9c48..26a7a9a24d 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -9,7 +9,7 @@ from numpy.lib import recfunctions from ..pakbase import Package -from ..utils import MfList, import_optional_dependency +from ..utils import MfList from ..utils.flopy_io import line_parse from ..utils.optionblock import 
OptionBlock from ..utils.recarray_utils import create_empty_recarray @@ -1175,7 +1175,7 @@ def assign_layers(self, adjust_botms=False, pad=1.0): ) header += "\n" - with open(logfile, "w") as log: + with open(os.path.join(self.parent.model_ws, logfile), "w") as log: log.write(header) a = np.array(l).transpose() for line in a: From 0f8b521407694442c3bb2e50ce3d4f6207f86e48 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Mon, 13 Nov 2023 21:17:27 -0500 Subject: [PATCH 086/101] refactor(shapefile_utils): warn if fieldname truncated per 10 char limit (#2003) --- flopy/export/shapefile_utils.py | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/flopy/export/shapefile_utils.py b/flopy/export/shapefile_utils.py index ffe8f527e1..41a6180ceb 100644 --- a/flopy/export/shapefile_utils.py +++ b/flopy/export/shapefile_utils.py @@ -9,7 +9,7 @@ import sys import warnings from pathlib import Path -from typing import Optional, Union +from typing import List, Optional, Union from warnings import warn import numpy as np @@ -471,20 +471,28 @@ def shape_attr_name(name, length=6, keep_layer=False): return n -def enforce_10ch_limit(names): +def enforce_10ch_limit(names: List[str], warnings: bool = True) -> List[str]: """Enforce 10 character limit for fieldnames. Add suffix for duplicate names starting at 0. Parameters ---------- names : list of strings + warnings : bool, whether to warn if names are truncated Returns ------- list list of unique strings of len <= 10. """ + + def truncate(s): + name = s[:5] + s[-4:] + "_" + if warnings: + warn(f"Truncating shapefile fieldname {s} to {name}") + return name + + names = [truncate(n) if len(n) > 10 else n for n in names] dups = {x: names.count(x) for x in names} suffix = {n: list(range(cnt)) for n, cnt in dups.items() if cnt > 1} for i, n in enumerate(names): From 16183c3059f4a9bb10d74423500b566eb41d3d6e Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Tue, 14 Nov 2023 10:45:21 -0500 Subject: [PATCH 087/101] fix(benchmarks): fix benchmark post-processing (#2004) --- .github/workflows/benchmark.yml | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-) diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index 645b1d930a..d7a849614e 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -57,15 +57,13 @@ jobs: if: failure() with: name: failed-benchmark-${{ matrix.os }}-${{ matrix.python-version }}-${{ github.run_id }} - path: | - ./autotest/.failed/** + path: autotest/.failed/** - name: Upload benchmark result artifact uses: actions/upload-artifact@v3 with: name: benchmarks-${{ matrix.os }}-${{ matrix.python-version }}-${{ github.run_id }} - path: | - ./autotest/.benchmarks/**/*.json + path: autotest/.benchmarks/**/*.json post_benchmark: needs: - benchmark strategy: fail-fast: false matrix: @@ -96,11 +94,17 @@ jobs: - name: Download all artifacts uses: actions/download-artifact@v3 with: - path: ./autotest/.benchmarks + path: autotest/.benchmarks - name: Process benchmark results run: | - artifact_json=$(gh api -X GET -H "Accept: application/vnd.github+json" /repos/modflowpy/flopy/actions/artifacts) + repo="${{ github.repository }}" + path="autotest/.benchmarks" + + # list benchmark artifacts + artifact_json=$(gh api -X GET -H "Accept: application/vnd.github+json" /repos/$repo/actions/artifacts) + + # get artifact ids and download artifacts get_artifact_ids=" import json import sys " echo $artifact_json \ |
python -c "$get_artifact_ids" \ - | xargs -I@ bash -c "gh api -H 'Accept: application/vnd.github+json' /repos/modflowpy/flopy/actions/artifacts/@/zip >> ./autotest/.benchmarks/@.zip" - zipfiles=( ./autotest/.benchmarks/*.zip ) + | xargs -I@ bash -c "gh api -H 'Accept: application/vnd.github+json' /repos/$repo/actions/artifacts/@/zip >> $path/@.zip" + + # unzip artifacts + zipfiles=( $path/*.zip ) if (( ${#zipfiles[@]} )); then - unzip -o './autotest/.benchmarks/*.zip' -d ./autotest/.benchmarks + unzip -o "$path/*.zip" -d $path fi - python ./scripts/process_benchmarks.py ./autotest/.benchmarks ./autotest/.benchmarks + + # process benchmarks + python scripts/process_benchmarks.py $path $path env: GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -127,5 +135,5 @@ jobs: with: name: benchmarks-${{ github.run_id }} path: | - ./autotest/.benchmarks/*.csv - ./autotest/.benchmarks/*.png + autotest/.benchmarks/*.csv + autotest/.benchmarks/*.png From b34f154362320059d92050b54014ec1fc9c9f09f Mon Sep 17 00:00:00 2001 From: jdhughes-usgs Date: Tue, 14 Nov 2023 14:25:33 -0600 Subject: [PATCH 088/101] feat(MfSimulationList): add functionality to parse the mfsim.lst file (#2005) * add testing for MfSimulationList --- autotest/test_mfsimlist.py | 108 ++++++++++++ flopy/mf6/utils/__init__.py | 1 + flopy/mf6/utils/mfsimlistfile.py | 288 +++++++++++++++++++++++++++++++ 3 files changed, 397 insertions(+) create mode 100644 autotest/test_mfsimlist.py create mode 100644 flopy/mf6/utils/mfsimlistfile.py diff --git a/autotest/test_mfsimlist.py b/autotest/test_mfsimlist.py new file mode 100644 index 0000000000..e45a046e89 --- /dev/null +++ b/autotest/test_mfsimlist.py @@ -0,0 +1,108 @@ +import numpy as np +import pytest +from autotest.conftest import get_example_data_path +from modflow_devtools.markers import requires_exe + +import flopy +from flopy.mf6 import MFSimulation + + +def base_model(sim_path): + load_path = get_example_data_path() / "mf6-freyberg" + + sim = MFSimulation.load(sim_ws=load_path) + sim.set_sim_path(sim_path) + sim.write_simulation() + sim.run_simulation() + + return sim + + +@pytest.mark.xfail +def test_mfsimlist_nofile(function_tmpdir): + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "fail.lst") + + +@requires_exe("mf6") +def test_mfsimlist_normal(function_tmpdir): + sim = base_model(function_tmpdir) + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") + assert mfsimlst.is_normal_termination, "model did not terminate normally" + + +@pytest.mark.xfail +def test_mfsimlist_runtime_fail(function_tmpdir): + sim = base_model(function_tmpdir) + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") + runtime_sec = mfsimlst.get_runtime(units="abc") + + +@requires_exe("mf6") +def test_mfsimlist_runtime(function_tmpdir): + sim = base_model(function_tmpdir) + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") + for sim_timer in ("elapsed", "formulate", "solution"): + runtime_sec = mfsimlst.get_runtime(simulation_timer=sim_timer) + if not np.isnan(runtime_sec): + runtime_min = mfsimlst.get_runtime( + units="minutes", simulation_timer=sim_timer + ) + assert runtime_sec / 60.0 == runtime_min, ( + f"model {sim_timer} time conversion from " + + "sec to minutes does not match" + ) + + runtime_hrs = mfsimlst.get_runtime( + units="hours", simulation_timer=sim_timer + ) + assert runtime_min / 60.0 == runtime_hrs, ( + f"model {sim_timer} time conversion from " + + "minutes to hours does not match" + ) + + +@requires_exe("mf6") +def 
test_mfsimlist_iterations(function_tmpdir): + it_outer_answer = 13 + it_total_answer = 413 + + sim = base_model(function_tmpdir) + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") + + it_outer = mfsimlst.get_outer_iterations() + assert it_outer == it_outer_answer, ( + f"outer iterations is not equal to {it_outer_answer} " + + f"({it_outer})" + ) + + it_total = mfsimlst.get_total_iterations() + assert it_total == it_total_answer, ( + f"total iterations is not equal to {it_total_answer} " + + f"({it_total})" + ) + + +@requires_exe("mf6") +def test_mfsimlist_memory(function_tmpdir): + virtual_answer = 0.0 + + sim = base_model(function_tmpdir) + mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") + + total_memory = mfsimlst.get_memory_usage() + assert total_memory > 0.0, ( + f"total memory is not greater than 0.0 " + f"({total_memory})" + ) + + virtual_memory = mfsimlst.get_memory_usage(virtual=True) + if not np.isnan(virtual_memory): + assert virtual_memory == virtual_answer, ( + f"virtual memory is not equal to {virtual_answer} " + + f"({virtual_memory})" + ) + + non_virtual_memory = mfsimlst.get_non_virtual_memory_usage() + assert total_memory == non_virtual_memory, ( + f"total memory ({total_memory}) " + + f"does not equal non-virtual memory ({non_virtual_memory})" + ) diff --git a/flopy/mf6/utils/__init__.py b/flopy/mf6/utils/__init__.py index 8abfe1ddc0..dd19d3a5d2 100644 --- a/flopy/mf6/utils/__init__.py +++ b/flopy/mf6/utils/__init__.py @@ -2,5 +2,6 @@ from .binarygrid_util import MfGrdFile from .generate_classes import generate_classes from .lakpak_utils import get_lak_connections +from .mfsimlistfile import MfSimulationList from .model_splitter import Mf6Splitter from .postprocessing import get_residuals, get_structured_faceflows diff --git a/flopy/mf6/utils/mfsimlistfile.py b/flopy/mf6/utils/mfsimlistfile.py new file mode 100644 index 0000000000..387da10be7 --- /dev/null +++ b/flopy/mf6/utils/mfsimlistfile.py @@ -0,0 +1,288 @@ +import os +import pathlib as pl +import re + +import numpy as np + + +class MfSimulationList: + def __init__(self, file_name: os.PathLike): + # Set up file reading + if isinstance(file_name, str): + file_name = pl.Path(file_name) + if not file_name.is_file(): + raise FileNotFoundError(f"file_name `{file_name}` not found") + self.file_name = file_name + self.f = open(file_name, "r", encoding="ascii", errors="replace") + + @property + def is_normal_termination(self) -> bool: + """ + Determine if the simulation terminated normally + + Returns + ------- + success: bool + Boolean indicating if the simulation terminated normally + + """ + # rewind the file + self._rewind_file() + + seekpoint = self._seek_to_string("Normal termination of simulation.") + self.f.seek(seekpoint) + line = self.f.readline() + if line == "": + success = False + else: + success = True + return success + + def get_runtime( + self, units: str = "seconds", simulation_timer: str = "elapsed" + ) -> float: + """ + Get model runtimes from the simulation list file. + + Parameters + ---------- + units : str + Units in which to return the timer. Acceptable values are + 'seconds', 'minutes', 'hours' (default is 'seconds') + simulation_timer : str + Timer to return. Acceptable values are 'elapsed', 'formulate', + 'solution' (default is 'elapsed') + + Returns + ------- + out : float + Floating point value with the runtime in requested units. 
Returns + NaN if runtime not found in list file + + """ + UNITS = ( + "seconds", + "minutes", + "hours", + ) + TIMERS = ( + "elapsed", + "formulate", + "solution", + ) + TIMERS_DICT = { + "elapsed": "Elapsed run time:", + "formulate": "Total formulate time:", + "solution": "Total solution time:", + } + + simulation_timer = simulation_timer.lower() + if simulation_timer not in TIMERS: + msg = ( + "simulation_timer input variable must be " + + ", ".join(TIMERS) + + f": {simulation_timer} was specified." + ) + raise ValueError(msg) + + units = units.lower() + if units not in UNITS: + msg = ( + "units input variable must be " + + ", ".join(UNITS) + + f": {units} was specified." + ) + raise ValueError(msg) + + # rewind the file + self._rewind_file() + + if simulation_timer == "elapsed": + seekpoint = self._seek_to_string(TIMERS_DICT[simulation_timer]) + self.f.seek(seekpoint) + line = self.f.readline().strip() + if line == "": + return np.nan + + # yank out the floating point values from the Elapsed run time string + times = list(map(float, re.findall(r"[+-]?[0-9.]+", line))) + # pad the times array with leading zeros so it has the form + # [days, hours, minutes, seconds] + times = np.array([0 for _ in range(4 - len(times))] + times) + # convert all to seconds + time2sec = np.array([24 * 60 * 60, 60 * 60, 60, 1]) + times_sec = np.sum(times * time2sec) + else: + seekpoint = self._seek_to_string(TIMERS_DICT[simulation_timer]) + line = self.f.readline().strip() + if line == "": + return np.nan + times_sec = float(line.split()[3]) + # return in the requested units + if units == "seconds": + return times_sec + elif units == "minutes": + return times_sec / 60.0 + elif units == "hours": + return times_sec / 60.0 / 60.0 + + def get_outer_iterations(self) -> int: + """ + Get the total outer iterations from the simulation list file. + + Parameters + ---------- + + Returns + ------- + outer_iterations : int + Sum of all calls to the numerical solution found in the list file + + """ + # initialize outer_iterations + outer_iterations = 0 + + # rewind the file + self._rewind_file() + + while True: + seekpoint = self._seek_to_string("CALLS TO NUMERICAL SOLUTION IN") + self.f.seek(seekpoint) + line = self.f.readline() + if line == "": + break + outer_iterations += int(line.split()[0]) + + return outer_iterations + + def get_total_iterations(self) -> int: + """ + Get the total number of iterations from the simulation list file. + + Parameters + ---------- + + Returns + ------- + total_iterations : int + Sum of all TOTAL ITERATIONS found in the list file + + """ + # initialize total_iterations + total_iterations = 0 + + # rewind the file + self._rewind_file() + + while True: + seekpoint = self._seek_to_string("TOTAL ITERATIONS") + self.f.seek(seekpoint) + line = self.f.readline() + if line == "": + break + total_iterations += int(line.split()[0]) + + return total_iterations + + def get_memory_usage(self, virtual=False) -> float: + """ + Get the simulation memory usage from the simulation list file.
+ + Parameters + ---------- + virtual : bool + Return total or virtual memory usage (default is total) + + Returns + ------- + memory_usage : float + Total memory usage for a simulation (in Gigabytes) + + """ + # initialize memory_usage + memory_usage = 0.0 + + # rewind the file + self._rewind_file() + + tags = ( + "MEMORY MANAGER TOTAL STORAGE BY DATA TYPE", + "Total", + "Virtual", + ) + + while True: + seekpoint = self._seek_to_string(tags[0]) + self.f.seek(seekpoint) + line = self.f.readline() + if line == "": + break + units = line.split()[-1] + if units == "GIGABYTES": + conversion = 1.0 + elif units == "MEGABYTES": + conversion = 1e-3 + elif units == "KILOBYTES": + conversion = 1e-6 + elif units == "BYTES": + conversion = 1e-9 + else: + raise ValueError(f"Unknown memory unit '{units}'") + + if virtual: + tag = tags[2] + else: + tag = tags[1] + seekpoint = self._seek_to_string(tag) + self.f.seek(seekpoint) + line = self.f.readline() + if line == "": + break + memory_usage = float(line.split()[-1]) * conversion + + return memory_usage + + def get_non_virtual_memory_usage(self): + """ + + Returns + ------- + non_virtual: float + Non-virtual memory usage, which is the difference between the + total and virtual memory usage + + """ + return self.get_memory_usage() - self.get_memory_usage(virtual=True) + + def _seek_to_string(self, s): + """ + Parameters + ---------- + s : str + Seek through the file to the next occurrence of s. Return the + seek location when found. + + Returns + ------- + seekpoint : int + Next location of the string + + """ + while True: + seekpoint = self.f.tell() + line = self.f.readline() + if line == "": + break + if s in line: + break + return seekpoint + + def _rewind_file(self): + """ + Rewind the simulation list file + + Returns + ------- + + """ + self.f.seek(0) From 7cc5e6f0b4861e2f9d85eb1e25592721d0427227 Mon Sep 17 00:00:00 2001 From: mjr-deltares <45555666+mjr-deltares@users.noreply.github.com> Date: Wed, 15 Nov 2023 18:19:23 +0100 Subject: [PATCH 089/101] fix(MfSimulationList): add missing seek to get_runtime method (#2006) * fix: add missing seek to get_runtime method * too early for that assert --- autotest/test_mfsimlist.py | 1 + flopy/mf6/utils/mfsimlistfile.py | 1 + 2 files changed, 2 insertions(+) diff --git a/autotest/test_mfsimlist.py b/autotest/test_mfsimlist.py index e45a046e89..5a4f1661ab 100644 --- a/autotest/test_mfsimlist.py +++ b/autotest/test_mfsimlist.py @@ -43,6 +43,7 @@ def test_mfsimlist_runtime(function_tmpdir): mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst") for sim_timer in ("elapsed", "formulate", "solution"): runtime_sec = mfsimlst.get_runtime(simulation_timer=sim_timer) + if not np.isnan(runtime_sec): runtime_min = mfsimlst.get_runtime( units="minutes", simulation_timer=sim_timer diff --git a/flopy/mf6/utils/mfsimlistfile.py b/flopy/mf6/utils/mfsimlistfile.py index 387da10be7..713d244d80 100644 --- a/flopy/mf6/utils/mfsimlistfile.py +++ b/flopy/mf6/utils/mfsimlistfile.py @@ -114,6 +114,7 @@ def get_runtime( times_sec = np.sum(times * time2sec) else: seekpoint = self._seek_to_string(TIMERS_DICT[simulation_timer]) + self.f.seek(seekpoint) line = self.f.readline().strip() if line == "": return np.nan From 419ae0e0c15b4f95f6ea334fe49f79e29917130b Mon Sep 17 00:00:00 2001 From: langevin-usgs Date: Tue, 21 Nov 2023 10:49:44 -0600 Subject: [PATCH 090/101] fix(get_disu_kwargs): incorrect indexing of delr and delc (#2011) * fix(get_disu_kwargs): incorrect indexing of delr and delc * add vertices to the return
of get_disu_kwargs * ensure nvert has proper value * handle delr/delc and ensure size is correct * isort * Modify to allow mf6 autotest to pass --- autotest/test_gridutil.py | 13 +++++++-- flopy/utils/gridutil.py | 56 ++++++++++++++++++++++++++++++++++----- 2 files changed, 61 insertions(+), 8 deletions(-) diff --git a/autotest/test_gridutil.py b/autotest/test_gridutil.py index c1288ff713..9be73a5430 100644 --- a/autotest/test_gridutil.py +++ b/autotest/test_gridutil.py @@ -82,11 +82,20 @@ def test_get_lni_infers_layer_count_when_int_ncpl(ncpl, nodes, expected): np.array([-10]), np.array([-30.0, -50.0]), ), + ( + 1, # nlay + 3, # nrow + 4, # ncol + np.array(4 * [4.]), # delr + np.array(3 * [3.]), # delc + np.array([-10]), # top + np.array([-30.0]), # botm + ), ], ) def test_get_disu_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): kwargs = get_disu_kwargs( - nlay=nlay, nrow=nrow, ncol=ncol, delr=delr, delc=delc, tp=tp, botm=botm + nlay=nlay, nrow=nrow, ncol=ncol, delr=delr, delc=delc, tp=tp, botm=botm, return_vertices=True ) from pprint import pprint @@ -94,7 +103,7 @@ def test_get_disu_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): pprint(kwargs["area"]) assert kwargs["nodes"] == nlay * nrow * ncol - assert kwargs["nvert"] == None + assert kwargs["nvert"] == (nrow + 1) * (ncol + 1) area = np.array([dr * dc for (dr, dc) in product(delr, delc)], dtype=float) area = np.array(nlay * [area]).flatten() diff --git a/flopy/utils/gridutil.py b/flopy/utils/gridutil.py index 000af76ce4..e1e6f146e5 100644 --- a/flopy/utils/gridutil.py +++ b/flopy/utils/gridutil.py @@ -6,7 +6,7 @@ import numpy as np -from .cvfdutil import get_disv_gridprops +from .cvfdutil import centroid_of_polygon, get_disv_gridprops def get_lni(ncpl, nodes) -> List[Tuple[int, int]]: @@ -68,6 +68,7 @@ def get_disu_kwargs( delc, tp, botm, + return_vertices=False, ): """ Create args needed to construct a DISU package for a regular @@ -89,11 +90,20 @@ def get_disu_kwargs( Top elevation(s) of cells in the model's top layer botm : numpy.ndarray Bottom elevation(s) for each layer + return_vertices: bool + If true, then include vertices and cell2d in kwargs """ def get_nn(k, i, j): return k * nrow * ncol + i * ncol + j + if not isinstance(delr, np.ndarray): + delr = np.array(delr) + if not isinstance(delc, np.ndarray): + delc = np.array(delc) + assert delr.shape == (ncol,) + assert delc.shape == (nrow,) + nodes = nlay * nrow * ncol iac = np.zeros((nodes), dtype=int) ja = [] @@ -110,8 +120,8 @@ def get_nn(k, i, j): n = get_nn(k, i, j) ja.append(n) iac[n] += 1 - area[n] = delr[i] * delc[j] - ihc.append(n + 1) + area[n] = delr[j] * delc[i] + ihc.append(k + 1) # put layer in diagonal for flopy plotting cl12.append(n + 1) hwva.append(n + 1) if k == 0: @@ -126,7 +136,7 @@ def get_nn(k, i, j): ihc.append(0) dz = botm[k - 1] - botm[k] cl12.append(0.5 * dz) - hwva.append(delr[i] * delc[j]) + hwva.append(delr[j] * delc[i]) # back if i > 0: ja.append(get_nn(k, i - 1, j)) @@ -165,14 +175,45 @@ def get_nn(k, i, j): else: dz = botm[k - 1] - botm[k] cl12.append(0.5 * dz) - hwva.append(delr[i] * delc[j]) + hwva.append(delr[j] * delc[i]) ja = np.array(ja, dtype=int) nja = ja.shape[0] hwva = np.array(hwva, dtype=float) + + # build vertices + nvert = None + if return_vertices: + xv = np.cumsum(delr) + xv = np.array([0] + list(xv)) + ymax = delc.sum() + yv = np.cumsum(delc) + yv = ymax - np.array([0] + list(yv)) + xmg, ymg = np.meshgrid(xv, yv) + nvert = xv.shape[0] * yv.shape[0] + verts = np.array(list(zip(xmg.flatten(), ymg.flatten()))) + vertices = [] + 
for i in range(nvert): + vertices.append((i, verts[i, 0], verts[i, 1])) + + cell2d = [] + icell = 0 + for k in range(nlay): + for i in range(nrow): + for j in range(ncol): + iv0 = j + i * (ncol + 1) # upper left vertex + iv1 = iv0 + 1 # upper right vertex + iv3 = iv0 + ncol + 1 # lower left vertex + iv2 = iv3 + 1 # lower right vertex + iverts = [iv0, iv1, iv2, iv3] + vlist = [(verts[iv, 0], verts[iv, 1]) for iv in iverts] + xc, yc = centroid_of_polygon(vlist) + cell2d.append([icell, xc, yc, len(iverts)] + iverts) + icell += 1 + kw = {} kw["nodes"] = nodes kw["nja"] = nja - kw["nvert"] = None + kw["nvert"] = nvert kw["top"] = top kw["bot"] = bot kw["area"] = area @@ -181,6 +222,9 @@ def get_nn(k, i, j): kw["ihc"] = ihc kw["cl12"] = cl12 kw["hwva"] = hwva + if return_vertices: + kw["vertices"] = vertices + kw["cell2d"] = cell2d return kw From 6e3c7911c1e94a9f8c36563662471c68a7aeeefc Mon Sep 17 00:00:00 2001 From: langevin-usgs Date: Tue, 21 Nov 2023 14:43:32 -0600 Subject: [PATCH 091/101] fix(PlotCrossSection): boundary conditions not plotting for DISU (#2012) --- flopy/plot/crosssection.py | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/flopy/plot/crosssection.py b/flopy/plot/crosssection.py index 1e50ee69e8..4ac019a0c0 100644 --- a/flopy/plot/crosssection.py +++ b/flopy/plot/crosssection.py @@ -877,14 +877,18 @@ def plot_bc( else: idx = mflist["node"] - if len(self.mg.shape) != 3: - plotarray = np.zeros((self._nlay, self._ncpl), dtype=int) - plotarray[tuple(idx)] = 1 - else: + if len(self.mg.shape) == 3: plotarray = np.zeros( (self.mg.nlay, self.mg.nrow, self.mg.ncol), dtype=int ) plotarray[idx[0], idx[1], idx[2]] = 1 + elif len(self.mg.shape) == 2: + plotarray = np.zeros((self._nlay, self._ncpl), dtype=int) + plotarray[tuple(idx)] = 1 + else: + plotarray = np.zeros(self._ncpl, dtype=int) + idx = idx.flatten() + plotarray[idx] = 1 plotarray = np.ma.masked_equal(plotarray, 0) if color is None: From d0546ce79a4aa9bde92641ff253dd35e21bcee2a Mon Sep 17 00:00:00 2001 From: Mike Taves Date: Wed, 22 Nov 2023 14:34:12 +1300 Subject: [PATCH 092/101] docs(notebooks): fix cell center for Simple MODFLOW 6 Model (#2013) --- .docs/Notebooks/mf6_simple_model_example.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.docs/Notebooks/mf6_simple_model_example.py b/.docs/Notebooks/mf6_simple_model_example.py index 8c00e30caa..6c7da6deb4 100644 --- a/.docs/Notebooks/mf6_simple_model_example.py +++ b/.docs/Notebooks/mf6_simple_model_example.py @@ -125,7 +125,7 @@ # must be entered as a tuple as the first entry. # Remember that these must be zero-based indices! 
chd_rec = [] -chd_rec.append(((0, int(N / 4), int(N / 4)), h2)) +chd_rec.append(((0, int(N / 2), int(N / 2)), h2)) for layer in range(0, Nlay): for row_col in range(0, N): chd_rec.append(((layer, row_col, 0), h1)) From be104c4a38d2b1f4acfbdd5a49a5b3ddb325f852 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Wed, 22 Nov 2023 10:31:05 -0500 Subject: [PATCH 093/101] feat(modflow): support dataframe for pkg data (#2010) * support pandas dataframe for node_data, stress_period_data etc for flopy.modflow.* pkgs * just convert to recarray, don't reimplement data storage with pandas like for mf6 * add tests with mnw and wel --- autotest/test_mnw.py | 22 +++++++-- autotest/test_modflow.py | 96 +++++++++++++++++++++++++++++++++++++-- flopy/modflow/mfag.py | 17 +++++-- flopy/modflow/mfchd.py | 3 +- flopy/modflow/mfdrn.py | 3 +- flopy/modflow/mfdrt.py | 3 +- flopy/modflow/mffhb.py | 9 +++- flopy/modflow/mfgage.py | 5 +- flopy/modflow/mfghb.py | 3 +- flopy/modflow/mfhyd.py | 5 +- flopy/modflow/mfmnw2.py | 7 +++ flopy/modflow/mfriv.py | 3 +- flopy/modflow/mfsfr2.py | 8 +++- flopy/modflow/mfstr.py | 7 ++- flopy/modflow/mfwel.py | 3 +- flopy/utils/mflistfile.py | 1 - flopy/utils/util_list.py | 51 ++++++++++----------- 17 files changed, 185 insertions(+), 61 deletions(-) diff --git a/autotest/test_mnw.py b/autotest/test_mnw.py index 266c55a14b..f50036dcdb 100644 --- a/autotest/test_mnw.py +++ b/autotest/test_mnw.py @@ -23,7 +23,7 @@ def mnw1_path(example_data_path): return example_data_path / "mf2005_test" -def test_load(function_tmpdir, example_data_path, mnw2_examples_path): +def test_load(function_tmpdir, mnw2_examples_path): """t027 test load of MNW2 Package""" # load in the test problem (1 well, 3 stress periods) m = Modflow.load( @@ -94,7 +94,8 @@ def test_mnw1_load_write(function_tmpdir, mnw1_path): assert np.array_equal(v, m2.mnw1.stress_period_data[k]) -def test_make_package(function_tmpdir): +@pytest.mark.parametrize("dataframe", [True, False]) +def test_make_package(function_tmpdir, dataframe): """t027 test make MNW2 Package""" ws = function_tmpdir m4 = Modflow("mnw2example", model_ws=ws) @@ -195,6 +196,9 @@ def test_make_package(function_tmpdir): ).view(np.recarray), } + if dataframe: + node_data = pd.DataFrame(node_data) + mnw2_4 = ModflowMnw2( model=m4, mnwmax=2, @@ -257,6 +261,9 @@ def test_make_package(function_tmpdir): ).view(np.recarray), } + if dataframe: + node_data = pd.DataFrame(node_data) + mnw2_4 = ModflowMnw2( model=m4, mnwmax=2, @@ -294,7 +301,8 @@ def test_make_package(function_tmpdir): ) -def test_mnw2_create_file(function_tmpdir): +@pytest.mark.parametrize("dataframe", [True, False]) +def test_mnw2_create_file(function_tmpdir, dataframe): """ Test for issue #556, Mnw2 crashed if wells have multiple node lengths @@ -341,8 +349,12 @@ def test_mnw2_create_file(function_tmpdir): wellids[i], nnodes=nlayers[i], nper=len(stress_period_data.index), - node_data=node_data.to_records(index=False), - stress_period_data=stress_period_data.to_records(index=False), + node_data=node_data.to_records(index=False) + if not dataframe + else node_data, + stress_period_data=stress_period_data.to_records(index=False) + if not dataframe + else stress_period_data, ) wells.append(wl) diff --git a/autotest/test_modflow.py b/autotest/test_modflow.py index 6f628aad68..6d8e6ffedf 100644 --- a/autotest/test_modflow.py +++ b/autotest/test_modflow.py @@ -3,8 +3,10 @@ import os import shutil from pathlib import Path +from typing import Dict import numpy as np +import pandas as pd import pytest from 
autotest.conftest import get_example_data_path from modflow_devtools.markers import excludes_platform, requires_exe @@ -261,7 +263,7 @@ def test_mt_modelgrid(function_tmpdir): assert np.array_equal(swt.modelgrid.idomain, ml.modelgrid.idomain) -@requires_exe("mp7") +@requires_exe("mp7", "mf2005") def test_exe_selection(example_data_path, function_tmpdir): model_path = example_data_path / "freyberg" namfile_path = model_path / "freyberg.nam" @@ -1287,7 +1289,91 @@ def test_load_with_list_reader(function_tmpdir): assert np.array_equal(originalwelra, m2.wel.stress_period_data[0]) -def get_basic_modflow_model(ws, name): +@requires_exe("mf2005") +@pytest.mark.parametrize( + "container", + [ + str(np.recarray), + str(pd.DataFrame), + str(Dict[int, np.recarray]), + str(Dict[int, pd.DataFrame]), + ], +) +def test_pkg_data_containers(function_tmpdir, container): + """Test various containers for package data (list, ndarray, recarray, dataframe, dict of such)""" + + name = "pkg_data" + ws = function_tmpdir + m = Modflow(name, model_ws=ws) + size = 100 + nlay = 10 + nper = 1 + + # grid discretization + dis = ModflowDis( + m, + nper=nper, + nlay=nlay, + nrow=size, + ncol=size, + top=nlay, + botm=list(range(nlay)), + ) + + # recharge pkg + rch = ModflowRch( + m, rech={k: 0.001 - np.cos(k) * 0.001 for k in range(nper)} + ) + + # well pkg, setup data per 'container' parameter + # to test support for various container types + ra = ModflowWel.get_empty(size**2) + ra_per = ra.copy() + ra_per["k"] = 1 + ra_per["i"] = ( + (np.ones((size, size)) * np.arange(size)) + .transpose() + .ravel() + .astype(int) + ) + ra_per["j"] = list(range(size)) * size + wel_dtype = np.dtype( + [ + ("k", int), + ("i", int), + ("j", int), + ("flux", np.float32), + ] + ) + df_per = pd.DataFrame(ra_per) + if "'numpy.recarray'" in container: + well_spd = ra_per + elif "'pandas.core.frame.DataFrame'" in container: + well_spd = df_per + elif "Dict[int, numpy.recarray]" in container: + well_spd = {} + well_spd[0] = ra_per + elif "Dict[int, pandas.core.frame.DataFrame]" in container: + well_spd = {} + well_spd[0] = df_per + wel = ModflowWel(m, stress_period_data=well_spd) + + # basic pkg + bas = ModflowBas(m) + + # solver + pcg = ModflowPcg(m) + + # output control pkg + oc = ModflowOc(m) + + # write and run the model + m.write_input() + success, _ = m.run_model(silent=False) + assert success + + +def get_perftest_model(ws, name): m = Modflow(name, model_ws=ws) size = 100 @@ -1338,20 +1424,20 @@ def get_basic_modflow_model(ws, name): @pytest.mark.slow def test_model_init_time(function_tmpdir, benchmark): name = inspect.getframeinfo(inspect.currentframe()).function - benchmark(lambda: get_basic_modflow_model(ws=function_tmpdir, name=name)) + benchmark(lambda: get_perftest_model(ws=function_tmpdir, name=name)) @pytest.mark.slow def test_model_write_time(function_tmpdir, benchmark): name = inspect.getframeinfo(inspect.currentframe()).function - model = get_basic_modflow_model(ws=function_tmpdir, name=name) + model = get_perftest_model(ws=function_tmpdir, name=name) benchmark(lambda: model.write_input()) @pytest.mark.slow def test_model_load_time(function_tmpdir, benchmark): name = inspect.getframeinfo(inspect.currentframe()).function - model = get_basic_modflow_model(ws=function_tmpdir, name=name) + model = get_perftest_model(ws=function_tmpdir, name=name) model.write_input() benchmark( lambda: Modflow.load( diff --git a/flopy/modflow/mfag.py b/flopy/modflow/mfag.py index 985ee27e08..ede221485d 100644 --- a/flopy/modflow/mfag.py +++ 
b/flopy/modflow/mfag.py @@ -11,6 +11,7 @@ import os import numpy as np +import pandas as pd from ..pakbase import Package from ..utils.flopy_io import multi_line_strip @@ -29,9 +30,9 @@ class ModflowAg(Package): model object options : flopy.utils.OptionBlock object option block object - time_series : np.recarray + time_series : np.recarray or pd.DataFrame numpy recarray for the time series block - well_list : np.recarray + well_list : np.recarray or pd.DataFrame recarray of the well_list block irrdiversion : dict {per: np.recarray} dictionary of the irrdiversion block @@ -269,8 +270,16 @@ def __init__( else: self.options = OptionBlock("", ModflowAg) - self.time_series = time_series - self.well_list = well_list + self.time_series = ( + time_series.to_records(index=False) + if isinstance(time_series, pd.DataFrame) + else time_series + ) + self.well_list = ( + well_list.to_records(index=False) + if isinstance(well_list, pd.DataFrame) + else well_list + ) self.irrdiversion = irrdiversion self.irrwell = irrwell self.supwell = supwell diff --git a/flopy/modflow/mfchd.py b/flopy/modflow/mfchd.py index fb47d9ad5a..530b606da0 100644 --- a/flopy/modflow/mfchd.py +++ b/flopy/modflow/mfchd.py @@ -24,8 +24,7 @@ class ModflowChd(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - stress_period_data : list of boundaries, recarrays, or dictionary of - boundaries. + stress_period_data : list, recarray, dataframe, or dictionary of boundaries. Each chd cell is defined through definition of layer (int), row (int), column (int), shead (float), ehead (float) diff --git a/flopy/modflow/mfdrn.py b/flopy/modflow/mfdrn.py index c94f9a39d0..c4614f5aa7 100644 --- a/flopy/modflow/mfdrn.py +++ b/flopy/modflow/mfdrn.py @@ -27,8 +27,7 @@ class ModflowDrn(Package): A flag that is used to determine if cell-by-cell budget data should be saved. If ipakcb is non-zero cell-by-cell budget data will be saved. (default is None). - stress_period_data : list of boundaries, recarrays, or dictionary of - boundaries. + stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each drain cell is defined through definition of layer(int), row(int), column(int), elevation(float), conductance(float). diff --git a/flopy/modflow/mfdrt.py b/flopy/modflow/mfdrt.py index 8bb2083c76..781ddcd3dd 100644 --- a/flopy/modflow/mfdrt.py +++ b/flopy/modflow/mfdrt.py @@ -27,8 +27,7 @@ class ModflowDrt(Package): A flag that is used to determine if cell-by-cell budget data should be saved. If ipakcb is non-zero cell-by-cell budget data will be saved. (default is None). - stress_period_data : list of boundaries, recarrays, or dictionary of - boundaries. + stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each drain return cell is defined through definition of layer(int), row(int), column(int), elevation(float), conductance(float), layerR (int) , rowR (int), colR (int) and rfprop (float). diff --git a/flopy/modflow/mffhb.py b/flopy/modflow/mffhb.py index 1e20e35b6b..64764078e3 100644 --- a/flopy/modflow/mffhb.py +++ b/flopy/modflow/mffhb.py @@ -8,6 +8,7 @@ """ import numpy as np +import pandas as pd from ..pakbase import Package from ..utils import read1d @@ -64,7 +65,7 @@ class ModflowFhb(Package): (default is 0.0) cnstm5 : float A constant multiplier for data list flwrat. 
(default is 1.0) - ds5 : list or numpy array or recarray + ds5 : list or numpy array or recarray or pandas dataframe Each FHB flwrat cell (dataset 5) is defined through definition of layer(int), row(int), column(int), iaux(int), flwrat[nbdtime](float). There should be nflw entries. (default is None) @@ -81,7 +82,7 @@ class ModflowFhb(Package): cnstm7 : float A constant multiplier for data list sbhedt. (default is 1.0) - ds7 : list or numpy array or recarray + ds7 : list or numpy array or recarray or pandas dataframe Each FHB sbhed cell (dataset 7) is defined through definition of layer(int), row(int), column(int), iaux(int), sbhed[nbdtime](float). There should be nhed entries. (default is None) @@ -211,6 +212,8 @@ def __init__( raise TypeError(msg) elif isinstance(ds5, list): ds5 = np.array(ds5) + elif isinstance(ds5, pd.DataFrame): + ds5 = ds5.to_records(index=False) # convert numpy array to a recarray if ds5.dtype != dtype: ds5 = np.core.records.fromarrays(ds5.transpose(), dtype=dtype) @@ -228,6 +231,8 @@ def __init__( raise TypeError(msg) elif isinstance(ds7, list): ds7 = np.array(ds7) + elif isinstance(ds7, pd.DataFrame): + ds7 = ds7.to_records(index=False) # convert numpy array to a recarray if ds7.dtype != dtype: ds7 = np.core.records.fromarrays(ds7.transpose(), dtype=dtype) diff --git a/flopy/modflow/mfgage.py b/flopy/modflow/mfgage.py index 16af8b5b14..48953cd0de 100644 --- a/flopy/modflow/mfgage.py +++ b/flopy/modflow/mfgage.py @@ -10,6 +10,7 @@ import os import numpy as np +import pandas as pd from ..pakbase import Package from ..utils import read_fixed_var, write_fixed_var @@ -27,7 +28,7 @@ class ModflowGage(Package): this package will be added. numgage : int The total number of gages included in the gage file (default is 0). - gage_data : list or numpy array + gage_data : list or numpy array or recarray or pandas dataframe data for dataset 2a and 2b in the gage package. If a list is provided then the list includes 2 to 3 entries (LAKE UNIT [OUTTYPE]) for each LAK Package entry and 4 entries (GAGESEG GAGERCH UNIT OUTTYPE) for @@ -132,6 +133,8 @@ def __init__( gage_data = np.core.records.fromarrays( gage_data.transpose(), dtype=dtype ) + elif isinstance(gage_data, pd.DataFrame): + gage_data = gage_data.to_records(index=False) elif isinstance(gage_data, list): d = ModflowGage.get_empty(ncells=numgage) for n in range(len(gage_data)): diff --git a/flopy/modflow/mfghb.py b/flopy/modflow/mfghb.py index 7b8d6b119d..903c88c646 100644 --- a/flopy/modflow/mfghb.py +++ b/flopy/modflow/mfghb.py @@ -27,8 +27,7 @@ class ModflowGhb(Package): A flag that is used to determine if cell-by-cell budget data should be saved. If ipakcb is non-zero cell-by-cell budget data will be saved. (default is 0). - stress_period_data : list of boundaries, recarray of boundaries or, - dictionary of boundaries. + stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each ghb cell is defined through definition of layer(int), row(int), column(int), stage(float), conductance(float) diff --git a/flopy/modflow/mfhyd.py b/flopy/modflow/mfhyd.py index a90ac12f2b..a2301bc1a4 100644 --- a/flopy/modflow/mfhyd.py +++ b/flopy/modflow/mfhyd.py @@ -8,6 +8,7 @@ """ import numpy as np +import pandas as pd from ..pakbase import Package from ..utils.recarray_utils import create_empty_recarray @@ -31,7 +32,7 @@ class ModflowHyd(Package): is a user-specified value that is output if a value cannot be computed at a hydrograph location. For example, the cell in which the hydrograph is located may be a no-flow cell. 
(default is -999.) - obsdata : list of lists, numpy array, or numpy recarray (nhyd, 7) + obsdata : list of lists, numpy array or recarray, or pandas dataframe (nhyd, 7) Each row of obsdata includes data defining pckg (3 character string), arr (2 character string), intyp (1 character string) klay (int), xl (float), yl (float), hydlbl (14 character string) for each @@ -158,6 +159,8 @@ def __init__( dtype = ModflowHyd.get_default_dtype() obs = ModflowHyd.get_empty(nhyd) + if isinstance(obsdata, pd.DataFrame): + obsdata = obsdata.to_records(index=False) if isinstance(obsdata, list): if len(obsdata) != nhyd: raise RuntimeError( diff --git a/flopy/modflow/mfmnw2.py b/flopy/modflow/mfmnw2.py index ae59bf9648..7a9a5548f5 100644 --- a/flopy/modflow/mfmnw2.py +++ b/flopy/modflow/mfmnw2.py @@ -2,6 +2,7 @@ import warnings import numpy as np +import pandas as pd from ..pakbase import Package from ..utils import MfList, check @@ -451,6 +452,8 @@ def __init__( # does this need to be Mflist? self.stress_period_data = self.get_empty_stress_period_data(nper) if stress_period_data is not None: + if isinstance(stress_period_data, pd.DataFrame): + stress_period_data = stress_period_data.to_records(index=False) for n in stress_period_data.dtype.names: self.stress_period_data[n] = stress_period_data[n] @@ -459,6 +462,8 @@ def __init__( np.abs(nnodes), aux_names=self.aux ) if node_data is not None: + if isinstance(node_data, pd.DataFrame): + node_data = node_data.to_records(index=False) for n in node_data.dtype.names: self.node_data[n] = node_data[n] # convert strings to lower case @@ -1054,6 +1059,8 @@ def __init__( self.node_data = self.get_empty_node_data(0, aux_names=aux) if node_data is not None: + if isinstance(node_data, pd.DataFrame): + node_data = node_data.to_records(index=False) self.node_data = self.get_empty_node_data( len(node_data), aux_names=aux ) diff --git a/flopy/modflow/mfriv.py b/flopy/modflow/mfriv.py index b734822389..a557da18a3 100644 --- a/flopy/modflow/mfriv.py +++ b/flopy/modflow/mfriv.py @@ -27,8 +27,7 @@ class ModflowRiv(Package): A flag that is used to determine if cell-by-cell budget data should be saved. If ipakcb is non-zero cell-by-cell budget data will be saved. (default is 0). - stress_period_data : list of boundaries, or recarray of boundaries, or - dictionary of boundaries. + stress_period_data : list, recarray, dataframe, or dictionary of boundaries. Each river cell is defined through definition of layer (int), row (int), column (int), stage (float), cond (float), rbot (float). diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index 26a7a9a24d..1287948070 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -172,13 +172,13 @@ class ModflowSfr2(Package): simulations (and would need to be converted to whatever units are being used in the particular simulation). (default is 0.0001; for MODFLOW-2005 simulations only when irtflg > 0) - reach_data : recarray + reach_data : recarray or dataframe Numpy record array of length equal to nstrm, with columns for each variable entered in item 2 (see SFR package input instructions). In following flopy convention, layer, row, column and node number (for unstructured grids) are zero-based; segment and reach are one-based. - segment_data : recarray + segment_data : recarray or dataframe Numpy record array of length equal to nss, with columns for each variable entered in items 6a, 6b and 6c (see SFR package input instructions). Segment numbers are one-based. 
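The docstring updates above, together with the tests added earlier in this series, mean stress period data for the classic MODFLOW packages can now be passed as a pandas DataFrame directly. A minimal sketch modeled on the new `test_pkg_data_containers` test, assuming a simple one-layer model:

```python
import pandas as pd
import flopy

m = flopy.modflow.Modflow("df_demo")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10, nper=1)

# column names must match the package dtype names (k, i, j, flux for WEL)
spd = pd.DataFrame(
    {"k": [0, 0], "i": [4, 5], "j": [4, 5], "flux": [-100.0, -50.0]}
)

# a dict keyed on stress period with a DataFrame per period; a bare
# DataFrame for period 0 works as well
wel = flopy.modflow.ModflowWel(m, stress_period_data={0: spd})
```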
@@ -431,6 +431,8 @@ def __init__( ) if segment_data is not None: # segment_data is a zero-d array + if isinstance(segment_data, pd.DataFrame): + segment_data = segment_data.to_records(index=False) if not isinstance(segment_data, dict): if len(segment_data.shape) == 0: segment_data = np.atleast_1d(segment_data) @@ -479,6 +481,8 @@ def __init__( # Dataset 2. self.reach_data = self.get_empty_reach_data(np.abs(self._nstrm)) if reach_data is not None: + if isinstance(reach_data, pd.DataFrame): + reach_data = reach_data.to_records(index=False) for n in reach_data.dtype.names: self.reach_data[n] = reach_data[n] diff --git a/flopy/modflow/mfstr.py b/flopy/modflow/mfstr.py index 7e16b2ee68..cb480f5235 100644 --- a/flopy/modflow/mfstr.py +++ b/flopy/modflow/mfstr.py @@ -8,6 +8,7 @@ """ import numpy as np +import pandas as pd from ..pakbase import Package from ..utils import MfList, read_fixed_var, write_fixed_var @@ -81,8 +82,8 @@ class ModflowStr(Package): datasets 6 and 8. The value for stress period data for a stress period can be an integer - (-1 or 0), a list of lists, a numpy array, or a numpy recarray. If - stress period data for a stress period contains an integer, a -1 + (-1 or 0), a list of lists, a numpy array or recarray, or a pandas + dataframe. If data for a stress period contains an integer, a -1 denotes data from the previous stress period will be reused and a 0 indicates there are no str reaches for this stress period. @@ -367,6 +368,8 @@ def __init__( for key, d in stress_period_data.items(): if isinstance(d, list): d = np.array(d) + if isinstance(d, pd.DataFrame): + d = d.to_records(index=False) if isinstance(d, np.recarray): e = ( "ModflowStr error: recarray dtype: {} does not match " diff --git a/flopy/modflow/mfwel.py b/flopy/modflow/mfwel.py index 0628bdc62e..69a51e42f5 100644 --- a/flopy/modflow/mfwel.py +++ b/flopy/modflow/mfwel.py @@ -28,8 +28,7 @@ class ModflowWel(Package): A flag that is used to determine if cell-by-cell budget data should be saved. If ipakcb is non-zero cell-by-cell budget data will be saved. (default is 0). - stress_period_data : list of boundaries, or recarray of boundaries, or - dictionary of boundaries + stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each well is defined through definition of layer (int), row (int), column (int), flux (float). 
The simplest form is a dictionary with a lists of boundaries for each diff --git a/flopy/utils/mflistfile.py b/flopy/utils/mflistfile.py index 6ac362cef1..b24cab7a2a 100644 --- a/flopy/utils/mflistfile.py +++ b/flopy/utils/mflistfile.py @@ -12,7 +12,6 @@ import numpy as np import pandas as pd -from ..utils import import_optional_dependency from ..utils.flopy_io import get_ts_sp from ..utils.utils_def import totim_to_datetime diff --git a/flopy/utils/util_list.py b/flopy/utils/util_list.py index 2aa83be270..3eae337906 100644 --- a/flopy/utils/util_list.py +++ b/flopy/utils/util_list.py @@ -14,7 +14,6 @@ import pandas as pd from ..datbase import DataInterface, DataListInterface, DataType -from ..utils import import_optional_dependency from ..utils.recarray_utils import create_empty_recarray @@ -94,7 +93,6 @@ def __init__( if package.parent.version == "mf2k": list_free_format = False self.list_free_format = list_free_format - return @property def name(self): @@ -307,41 +305,37 @@ def __cast_data(self, data): try: data = np.array(data) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: casting list to ndarray: {e!s}" ) # If data is a dict, the we have to assume it is keyed on kper if isinstance(data, dict): if not list(data.keys()): - raise Exception("MfList error: data dict is empty") + raise ValueError("MfList error: data dict is empty") for kper, d in data.items(): try: kper = int(kper) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: data dict key {kper} not integer: " f"{type(kper)}\n{e!s}" ) # Same as before, just try... if isinstance(d, list): - # warnings.warn("MfList: casting list to array at " +\ - # "kper {0:d}".format(kper)) try: d = np.array(d) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: casting list to ndarray: {e}" ) - # super hack - sick of recarrays already - # if (isinstance(d,np.ndarray) and len(d.dtype.fields) > 1): - # d = d.view(np.recarray) - if isinstance(d, np.recarray): self.__cast_recarray(kper, d) elif isinstance(d, np.ndarray): self.__cast_ndarray(kper, d) + elif isinstance(d, pd.DataFrame): + self.__cast_dataframe(kper, d) elif isinstance(d, int): self.__cast_int(kper, d) elif isinstance(d, str): @@ -350,7 +344,7 @@ def __cast_data(self, data): self.__data[kper] = -1 self.__vtype[kper] = None else: - raise Exception( + raise ValueError( "MfList error: unsupported data type: " f"{type(d)} at kper {kper}" ) @@ -361,11 +355,14 @@ def __cast_data(self, data): # A single ndarray elif isinstance(data, np.ndarray): self.__cast_ndarray(0, data) + # A single dataframe + elif isinstance(data, pd.DataFrame): + self.__cast_dataframe(0, data) # A single filename elif isinstance(data, str): self.__cast_str(0, data) else: - raise Exception( + raise ValueError( f"MfList error: unsupported data type: {type(data)}" ) @@ -381,7 +378,7 @@ def __cast_str(self, kper, d): def __cast_int(self, kper, d): # If d is an integer, then it must be 0 or -1 if d > 0: - raise Exception( + raise ValueError( "MfList error: dict integer value for " "kper {:10d} must be 0 or -1, " "not {:10d}".format(kper, d) @@ -408,18 +405,19 @@ def __cast_ndarray(self, kper, d): f"MfList error: ndarray shape {d.shape} doesn't match " f"dtype len: {len(self.dtype)}" ) - # warnings.warn("MfList: ndarray dtype does not match self " +\ - # "dtype, trying to cast") try: self.__data[kper] = np.core.records.fromarrays( d.transpose(), dtype=self.dtype ) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: 
casting ndarray to recarray: {e!s}" ) self.__vtype[kper] = np.recarray + def __cast_dataframe(self, kper, d): + self.__cast_recarray(kper, d.to_records(index=False)) + def get_dataframe(self, squeeze=False): """ Cast recarrays for stress periods into single @@ -534,7 +532,7 @@ def add_record(self, kper, index, values): try: self.__data[kper][-1] = tuple(rec) except Exception as e: - raise Exception( + raise ValueError( f"MfList.add_record() error: adding record to recarray: {e}" ) @@ -548,7 +546,7 @@ def __getitem__(self, kper): try: kper = int(kper) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: _getitem__() passed invalid kper index: {kper}" ) if kper not in list(self.data.keys()): @@ -578,7 +576,7 @@ def __setitem__(self, kper, data): try: data = np.array(data) except Exception as e: - raise Exception( + raise ValueError( f"MfList error: casting list to ndarray: {e!s}" ) # cast data @@ -593,7 +591,7 @@ def __setitem__(self, kper, data): elif isinstance(data, str): self.__cast_str(kper, data) else: - raise Exception( + raise ValueError( f"MfList error: unsupported data type: {type(data)}" ) @@ -604,7 +602,7 @@ def __fromfile(self, f): try: d = np.genfromtxt(f, dtype=self.dtype) except Exception as e: - raise Exception( + raise ValueError( f"MfList.__fromfile() error reading recarray from file {e!s}" ) return d @@ -1097,8 +1095,9 @@ def to_array(self, kper=0, mask=False): for name, arr in arrays.items(): arrays[name][:] = np.NaN return arrays - else: - raise Exception("MfList: something bad happened") + raise ValueError( + f"MfList: expected no entries for period {kper} but found {sarr}" + ) for name, arr in arrays.items(): if unstructured: @@ -1225,7 +1224,7 @@ def masked4D_arrays_to_stress_period_data(dtype, m4ds): for i2, key2 in enumerate(keys[i1:]): a2 = np.isnan(m4ds[key2]) if not np.array_equal(a1, a2): - raise Exception( + raise ValueError( f"Transient2d error: masking not equal for {key1} and {key2}" ) From 121ca78f6073813cec082cfd01f483dad6838781 Mon Sep 17 00:00:00 2001 From: jdhughes-usgs Date: Wed, 22 Nov 2023 13:15:09 -0600 Subject: [PATCH 094/101] feat(mfsimlist): add functionality to parse memory_print_options (#2009) * add function to return memory summary information as a dictionary * add function to return full memory table information as a dictionary --- autotest/test_mfsimlist.py | 61 +++++- flopy/mf6/utils/mfsimlistfile.py | 313 ++++++++++++++++++++++++++++--- 2 files changed, 343 insertions(+), 31 deletions(-) diff --git a/autotest/test_mfsimlist.py b/autotest/test_mfsimlist.py index 5a4f1661ab..150d07204d 100644 --- a/autotest/test_mfsimlist.py +++ b/autotest/test_mfsimlist.py @@ -1,4 +1,5 @@ import numpy as np +import pandas as pd import pytest from autotest.conftest import get_example_data_path from modflow_devtools.markers import requires_exe @@ -6,11 +7,22 @@ import flopy from flopy.mf6 import MFSimulation +MEMORY_UNITS = ("gigabytes", "megabytes", "kilobytes", "bytes") + + +def base_model(sim_path, memory_print_option=None): + MEMORY_PRINT_OPTIONS = ("summary", "all") + if memory_print_option is not None: + if memory_print_option.lower() not in MEMORY_PRINT_OPTIONS: + raise ValueError( + f"invalid memory_print option ({memory_print_option.lower()})" + ) -def base_model(sim_path): load_path = get_example_data_path() / "mf6-freyberg" sim = MFSimulation.load(sim_ws=load_path) + if memory_print_option is not None: + sim.memory_print_option = memory_print_option sim.set_sim_path(sim_path) sim.write_simulation() sim.run_simulation() @@ 
-27,7 +39,7 @@ def test_mfsimlist_nofile(function_tmpdir):
 def test_mfsimlist_normal(function_tmpdir):
     sim = base_model(function_tmpdir)
     mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst")
-    assert mfsimlst.is_normal_termination, "model did not terminate normally"
+    assert mfsimlst.normal_termination, "model did not terminate normally"
 
 
 @pytest.mark.xfail
@@ -95,6 +107,13 @@ def test_mfsimlist_memory(function_tmpdir):
             f"total memory is not greater than 0.0 " + f"({total_memory})"
         )
 
+    total_memory_kb = mfsimlst.get_memory_usage(units="kilobytes")
+    assert np.allclose(total_memory_kb, total_memory * 1e6), (
+        f"total memory in kilobytes ({total_memory_kb}) is not equal to "
+        + f"the total memory converted to kilobytes "
+        + f"({total_memory * 1e6})"
+    )
+
     virtual_memory = mfsimlst.get_memory_usage(virtual=True)
     if not np.isnan(virtual_memory):
         assert virtual_memory == virtual_answer, (
@@ -107,3 +126,41 @@ def test_mfsimlist_memory(function_tmpdir):
         f"total memory ({total_memory}) "
         + f"does not equal non-virtual memory ({non_virtual_memory})"
     )
+
+
+@requires_exe("mf6")
+@pytest.mark.parametrize("mem_option", (None, "summary"))
+def test_mfsimlist_memory_summary(mem_option, function_tmpdir):
+    KEYS = ("TDIS", "FREYBERG", "SLN_1")
+    sim = base_model(function_tmpdir, memory_print_option=mem_option)
+    mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst")
+
+    if mem_option is None:
+        mem_dict = mfsimlst.get_memory_summary()
+        assert mem_dict is None, "Expected None to be returned"
+    else:
+        for units in MEMORY_UNITS:
+            mem_dict = mfsimlst.get_memory_summary(units=units)
+            for key in KEYS:
+                assert key in mem_dict, f"memory summary key ({key}) not in memory summary"
+
+
+@requires_exe("mf6")
+@pytest.mark.parametrize("mem_option", (None, "all"))
+def test_mfsimlist_memory_all(mem_option, function_tmpdir):
+    sim = base_model(function_tmpdir, memory_print_option=mem_option)
+    mfsimlst = flopy.mf6.utils.MfSimulationList(function_tmpdir / "mfsim.lst")
+
+    if mem_option is None:
+        mem_dict = mfsimlst.get_memory_all()
+        assert mem_dict is None, "Expected None to be returned"
+    else:
+        for units in MEMORY_UNITS:
+            mem_dict = mfsimlst.get_memory_all(units=units)
+            total = 0.0
+            for key, value in mem_dict.items():
+                total += value["MEMORYSIZE"]
+            # total_ = mfsimlst.get_memory_usage(units=units)
+            # diff = total_ - total
+            # percent_diff = 100.0 * diff / total_
+            assert total > 0.0, "memory is not greater than zero"
diff --git a/flopy/mf6/utils/mfsimlistfile.py b/flopy/mf6/utils/mfsimlistfile.py
index 713d244d80..c7270711b3 100644
--- a/flopy/mf6/utils/mfsimlistfile.py
+++ b/flopy/mf6/utils/mfsimlistfile.py
@@ -1,6 +1,7 @@
 import os
 import pathlib as pl
 import re
+import warnings
 
 import numpy as np
 
@@ -15,10 +16,13 @@ def __init__(self, file_name: os.PathLike):
         self.file_name = file_name
         self.f = open(file_name, "r", encoding="ascii", errors="replace")
 
+        self.normal_termination = self._get_termination_message()
+        self.memory_print_option = self._memory_print_option()
+
     @property
     def is_normal_termination(self) -> bool:
         """
-        Determine if the simulation terminated normally
+        Return boolean indicating if the simulation terminated normally
 
         Returns
         -------
@@ -26,17 +30,7 @@ def is_normal_termination(self) -> bool:
             Boolean indicating if the simulation terminated normally
 
         """
-        # rewind the file
-        self._rewind_file()
-
-        seekpoint = self._seek_to_string("Normal termination of simulation.")
-        self.f.seek(seekpoint)
-        line = self.f.readline()
-        if line == "":
-            success = False
else: - success = True - return success + return self.normal_termination def get_runtime( self, units: str = "seconds", simulation_timer: str = "elapsed" @@ -104,11 +98,13 @@ def get_runtime( if line == "": return np.nan - # yank out the floating point values from the Elapsed run time string + # parse floating point values from the Elapsed run time string times = list(map(float, re.findall(r"[+-]?[0-9.]+", line))) + # pad an array with zeros and times with # [days, hours, minutes, seconds] times = np.array([0 for _ in range(4 - len(times))] + times) + # convert all to seconds time2sec = np.array([24 * 60 * 60, 60 * 60, 60, 1]) times_sec = np.sum(times * time2sec) @@ -119,7 +115,8 @@ def get_runtime( if line == "": return np.nan times_sec = float(line.split()[3]) - # return in the requested units + + # return time in the requested units if units == "seconds": return times_sec elif units == "minutes": @@ -185,7 +182,11 @@ def get_total_iterations(self) -> int: return total_iterations - def get_memory_usage(self, virtual=False) -> float: + def get_memory_usage( + self, + virtual: bool = False, + units: str = "gigabytes", + ) -> float: """ Get the simulation memory usage from the simulation list file. @@ -193,6 +194,9 @@ def get_memory_usage(self, virtual=False) -> float: ---------- virtual : bool Return total or virtual memory usage (default is total) + units : str + Memory units for return results. Valid values are 'gigabytes', + 'megabytes', 'kilobytes', and 'bytes' (default is 'gigabytes'). Returns ------- @@ -200,7 +204,7 @@ def get_memory_usage(self, virtual=False) -> float: Total memory usage for a simulation (in Gigabytes) """ - # initialize total_iterations + # initialize memory_usage memory_usage = 0.0 # rewind the file @@ -214,21 +218,16 @@ def get_memory_usage(self, virtual=False) -> float: while True: seekpoint = self._seek_to_string(tags[0]) + self.f.seek(seekpoint) - line = self.f.readline() + line = self.f.readline().strip() if line == "": break - units = line.split()[-1] - if units == "GIGABYTES": - conversion = 1.0 - elif units == "MEGABYTES": - conversion = 1e-3 - elif units == "KILOBYTES": - conversion = 1e-6 - elif units == "BYTES": - conversion = 1e-9 - else: - raise ValueError(f"Unknown memory unit '{units}'") + sim_units = line.split()[-1] + unit_conversion = self._get_memory_unit_conversion( + sim_units, + return_units_str=units.upper(), + ) if virtual: tag = tags[2] @@ -239,7 +238,7 @@ def get_memory_usage(self, virtual=False) -> float: line = self.f.readline() if line == "": break - memory_usage = float(line.split()[-1]) * conversion + memory_usage = float(line.split()[-1]) * unit_conversion return memory_usage @@ -255,6 +254,170 @@ def get_non_virtual_memory_usage(self): """ return self.get_memory_usage() - self.get_memory_usage(virtual=True) + def get_memory_summary(self, units: str = "gigabytes") -> dict: + """ + Get the summary memory information if it is available in the + simulation list file. Summary memory information is only available + if the memory_print_option is set to 'summary' in the simulation + name file options block. + + Parameters + ---------- + units : str + Memory units for return results. Valid values are 'gigabytes', + 'megabytes', 'kilobytes', and 'bytes' (default is 'gigabytes'). + + Returns + ------- + memory_summary : dict + dictionary with the total memory for each simulation component. + None is returned if summary memory data is not present in the + simulation listing file. 
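+
+        Examples
+        --------
+        A minimal sketch, assuming ``mfsim.lst`` was written by a
+        simulation with memory_print_option set to 'summary':
+
+        >>> from flopy.mf6.utils import MfSimulationList
+        >>> mfsimlst = MfSimulationList("mfsim.lst")
+        >>> mem_summary = mfsimlst.get_memory_summary(units="megabytes")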
+ + + """ + # initialize the return variable + memory_summary = None + + if self.memory_print_option != "summary": + msg = ( + "Cannot retrieve memory data using get_memory_summary() " + + "since memory_print_option is not set to 'SUMMARY'. " + + "Returning None." + ) + warnings.warn(msg, category=Warning) + + else: + # rewind the file + self._rewind_file() + + seekpoint = self._seek_to_string( + "SUMMARY INFORMATION ON VARIABLES " + + "STORED IN THE MEMORY MANAGER" + ) + self.f.seek(seekpoint) + line = self.f.readline().strip() + + if line != "": + sim_units = line.split()[-1] + unit_conversion = self._get_memory_unit_conversion( + sim_units, + return_units_str=units.upper(), + ) + # read the header + for k in range(3): + _ = self.f.readline() + terminator = 100 * "-" + memory_summary = {} + while True: + line = self.f.readline().strip() + if line == terminator: + break + data = line.split() + memory_summary[data[0]] = float(data[-1]) * unit_conversion + + return memory_summary + + def get_memory_all(self, units: str = "gigabytes") -> dict: + """ + Get a dictionary of the memory table written if it is available in the + simulation list file. The memory table is only available + if the memory_print_option is set to 'all' in the simulation + name file options block. + + Parameters + ---------- + units : str + Memory units for return results. Valid values are 'gigabytes', + 'megabytes', 'kilobytes', and 'bytes' (default is 'gigabytes'). + + Returns + ------- + memory_all : dict + dictionary with the memory information for each variable in the + MODFLOW 6 memory manager. The dictionary keys are the full memory + path for a variable (the memory path and variable name). The + dictionary entry for each key includes the memory path, the + variable name, data type, size, and memory used for each variable. + None is returned if the memory table is not present in the + simulation listing file. + + + """ + # initialize the return variable + memory_all = None + + TYPE_SIZE = { + "INTEGER": 4.0, + "DOUBLE": 8.0, + "LOGICAL": 4.0, + "STRING": 1.0, + } + if self.memory_print_option != "all": + msg = ( + "Cannot retrieve memory data using get_memory_all() since " + + "memory_print_option is not set to 'ALL'. Returning None." 
+ ) + warnings.warn(msg, category=Warning) + else: + # rewind the file + self._rewind_file() + + seekpoint = self._seek_to_string( + "DETAILED INFORMATION ON VARIABLES " + + "STORED IN THE MEMORY MANAGER" + ) + self.f.seek(seekpoint) + line = self.f.readline().strip() + + if line != "": + sim_units = "BYTES" + unit_conversion = self._get_memory_unit_conversion( + sim_units, + return_units_str=units.upper(), + ) + # read the header + for k in range(3): + _ = self.f.readline() + terminator = 173 * "-" + memory_all = {} + # read the data + while True: + line = self.f.readline().strip() + if line == terminator: + break + if "STRING LEN=" in line: + mempath = line[0:50].strip() + varname = line[51:67].strip() + data_type = line[68:84].strip() + no_items = float(line[84:105].strip()) + assoc_var = line[106:].strip() + variable_bytes = ( + TYPE_SIZE["STRING"] + * float(data_type.replace("STRING LEN=", "")) + * no_items + ) + else: + data = line.split() + mempath = data[0] + varname = data[1] + data_type = data[2] + no_items = float(data[3]) + assoc_var = data[4] + variable_bytes = TYPE_SIZE[data_type] * no_items + + if assoc_var == "--": + size_bytes = variable_bytes * unit_conversion + memory_all[f"{mempath}/{varname}"] = { + "MEMPATH": mempath, + "VARIABLE": varname, + "DATATYPE": data_type, + "SIZE": no_items, + "MEMORYSIZE": size_bytes, + } + + return memory_all + def _seek_to_string(self, s): """ Parameters @@ -287,3 +450,95 @@ def _rewind_file(self): """ self.f.seek(0) + + def _get_termination_message(self) -> bool: + """ + Determine if the simulation terminated normally + + Returns + ------- + success: bool + Boolean indicating if the simulation terminated normally + + """ + # rewind the file + self._rewind_file() + + seekpoint = self._seek_to_string("Normal termination of simulation.") + self.f.seek(seekpoint) + line = self.f.readline().strip() + if line == "": + success = False + else: + success = True + return success + + def _get_memory_unit_conversion( + self, + sim_units_str: str, + return_units_str, + ) -> float: + """ + Calculate memory unit conversion factor that converts from reported + units to gigabytes + + Parameters + ---------- + sim_units_str : str + Memory Units in the simulation listing file. Valid values are + 'GIGABYTES', 'MEGABYTES', 'KILOBYTES', or 'BYTES'. 
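+        return_units_str : str
+            Memory units of the returned conversion factor. Valid values
+            are 'GIGABYTES', 'MEGABYTES', 'KILOBYTES', and 'BYTES'.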
+ + Returns + ------- + unit_conversion : float + Unit conversion factor + + """ + valid_units = ( + "GIGABYTES", + "MEGABYTES", + "KILOBYTES", + "BYTES", + ) + if sim_units_str not in valid_units: + raise ValueError(f"Unknown memory unit '{sim_units_str}'") + + factor = [1.0, 1e-3, 1e-6, 1e-9] + if return_units_str == "MEGABYTES": + factor = [v * 1e3 for v in factor] + elif return_units_str == "KILOBYTES": + factor = [v * 1e6 for v in factor] + elif return_units_str == "BYTES": + factor = [v * 1e9 for v in factor] + factor_dict = {tag: factor[idx] for idx, tag in enumerate(valid_units)} + + return factor_dict[sim_units_str] + + def _memory_print_option(self) -> str: + """ + Determine the memory print option selected + + Returns + ------- + option: str + memory_print_option ('summary', 'all', or None) + + """ + # rewind the file + self._rewind_file() + + seekpoint = self._seek_to_string("MEMORY_PRINT_OPTION SET TO") + self.f.seek(seekpoint) + line = self.f.readline().strip() + if line == "": + option = None + else: + option_list = re.findall(r'"([^"]*)"', line) + if len(option_list) < 1: + raise LookupError( + "could not parse memory_print_option from" + f"'{line}'" + ) + option = option_list[-1].lower() + if option not in ("all", "summary"): + raise ValueError(f"unknown memory print option {option}") + return option From 0ffe85cdeeee3d545aa49672753c01714dbeb0d5 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Wed, 22 Nov 2023 16:44:45 -0500 Subject: [PATCH 095/101] docs(datafiles): clarify 0-based vs 1-based indexing (#2000) * clarify that get_kstpkper() is 0-based, kstpkper is 1-based * fix get_data() usage examples to use 0-based indices * clarify warnings and error messages --- flopy/utils/binaryfile.py | 55 +++++++++++++++++------------------- flopy/utils/datafile.py | 34 ++++++++-------------- flopy/utils/formattedfile.py | 7 ++--- 3 files changed, 40 insertions(+), 56 deletions(-) diff --git a/flopy/utils/binaryfile.py b/flopy/utils/binaryfile.py index cf384fa359..8ac1416ae4 100644 --- a/flopy/utils/binaryfile.py +++ b/flopy/utils/binaryfile.py @@ -458,10 +458,12 @@ def _build_index(self): if self.nrow < 0 or self.ncol < 0: raise Exception("negative nrow, ncol") - if self.nrow > 1 and self.nrow * self.ncol > 10000000: - s = "Possible error. 
ncol ({}) * nrow ({}) > 10,000,000 " - s = s.format(self.ncol, self.nrow) - warnings.warn(s) + + warn_threshold = 10000000 + if self.nrow > 1 and self.nrow * self.ncol > warn_threshold: + warnings.warn( + f"Very large grid, ncol ({self.ncol}) * nrow ({self.nrow}) > {warn_threshold}" + ) self.file.seek(0, 2) self.totalbytes = self.file.tell() self.file.seek(0, 0) @@ -473,14 +475,12 @@ def _build_index(self): continue if ipos == 0: self.times.append(header["totim"]) - kstpkper = (header["kstp"], header["kper"]) - self.kstpkper.append(kstpkper) + self.kstpkper.append((header["kstp"], header["kper"])) else: totim = header["totim"] if totim != self.times[-1]: self.times.append(totim) - kstpkper = (header["kstp"], header["kper"]) - self.kstpkper.append(kstpkper) + self.kstpkper.append((header["kstp"], header["kper"])) ipos = self.file.tell() self.iposarray.append(ipos) databytes = self.get_databytes(header) @@ -609,7 +609,7 @@ class HeadFile(BinaryLayerFile): >>> import flopy.utils.binaryfile as bf >>> hdobj = bf.HeadFile('model.hds', precision='single') >>> hdobj.list_records() - >>> rec = hdobj.get_data(kstpkper=(1, 50)) + >>> rec = hdobj.get_data(kstpkper=(0, 49)) >>> ddnobj = bf.HeadFile('model.ddn', text='drawdown', precision='single') >>> ddnobj.list_records() @@ -762,7 +762,7 @@ class UcnFile(BinaryLayerFile): >>> import flopy.utils.binaryfile as bf >>> ucnobj = bf.UcnFile('MT3D001.UCN', precision='single') >>> ucnobj.list_records() - >>> rec = ucnobj.get_data(kstpkper=(1,1)) + >>> rec = ucnobj.get_data(kstpkper=(0, 0)) """ @@ -829,7 +829,7 @@ class HeadUFile(BinaryLayerFile): >>> import flopy.utils.binaryfile as bf >>> hdobj = bf.HeadUFile('model.hds') >>> hdobj.list_records() - >>> usgheads = hdobj.get_data(kstpkper=(1, 50)) + >>> usgheads = hdobj.get_data(kstpkper=(0, 49)) """ @@ -1505,19 +1505,16 @@ def _unique_package_names(self): def get_kstpkper(self): """ - Get a list of unique stress periods and time steps in the file + Get a list of unique tuples (stress period, time step) in the file. + Indices are 0-based, use the `kstpkper` attribute for 1-based. Returns - ---------- - out : list of (kstp, kper) tuples - List of unique kstp, kper combinations in binary file. kstp and - kper values are zero-based. 
- + ------- + list of (kstp, kper) tuples + List of unique combinations of stress period & + time step indices (0-based) in the binary file """ - kstpkper = [] - for kstp, kper in self.kstpkper: - kstpkper.append((kstp - 1, kper - 1)) - return kstpkper + return [(kstp - 1, kper - 1) for kstp, kper in self.kstpkper] def get_indices(self, text=None): """ @@ -1764,25 +1761,25 @@ def get_ts(self, idx, text=None, times=None): # Initialize result array and put times in first column result = self._init_result(nstation) - kk = self.get_kstpkper() timesint = self.get_times() + kstpkper = self.get_kstpkper() + nsteps = len(kstpkper) if len(timesint) < 1: if times is None: - timesint = [x + 1 for x in range(len(kk))] + timesint = [x + 1 for x in range(nsteps)] else: if isinstance(times, np.ndarray): times = times.tolist() - if len(times) != len(kk): - raise Exception( - "times passed to CellBudgetFile get_ts() " - "method must be equal to {} " - "not {}".format(len(kk), len(times)) + if len(times) != nsteps: + raise ValueError( + f"number of times provided ({len(times)}) must equal " + f"number of time steps in cell budget file ({nsteps})" ) timesint = times for idx, t in enumerate(timesint): result[idx, 0] = t - for itim, k in enumerate(kk): + for itim, k in enumerate(kstpkper): try: v = self.get_data(kstpkper=k, text=text, full3D=True) # skip missing data - required for storage diff --git a/flopy/utils/datafile.py b/flopy/utils/datafile.py index 81b84901cc..5c5d3dc811 100644 --- a/flopy/utils/datafile.py +++ b/flopy/utils/datafile.py @@ -149,9 +149,8 @@ def get_values(self): class LayerFile: """ - The LayerFile class is the abstract base class from which specific derived - classes are formed. LayerFile This class should not be instantiated - directly. + Base class for layered output files. + Do not instantiate directly. """ @@ -477,19 +476,16 @@ def get_times(self): def get_kstpkper(self): """ - Get a list of unique stress periods and time steps in the file + Get a list of unique tuples (stress period, time step) in the file. + Indices are 0-based, use the `kstpkper` attribute for 1-based. Returns - ---------- - out : list of (kstp, kper) tuples - List of unique kstp, kper combinations in binary file. kstp and - kper values are presently zero-based. - + ------- + list of (kstp, kper) tuples + List of unique combinations of stress period & + time step indices (0-based) in the binary file """ - kstpkper = [] - for kstp, kper in self.kstpkper: - kstpkper.append((kstp - 1, kper - 1)) - return kstpkper + return [(kstp - 1, kper - 1) for kstp, kper in self.kstpkper] def get_data(self, kstpkper=None, idx=None, totim=None, mflay=None): """ @@ -498,10 +494,9 @@ def get_data(self, kstpkper=None, idx=None, totim=None, mflay=None): Parameters ---------- idx : int - The zero-based record number. The first record is record 0. + The zero-based record number. The first record is record 0. kstpkper : tuple of ints - A tuple containing the time step and stress period (kstp, kper). - These are zero-based kstp and kper values. + A tuple (kstep, kper) of zero-based time step and stress period. totim : float The simulation time. mflay : integer @@ -514,14 +509,9 @@ def get_data(self, kstpkper=None, idx=None, totim=None, mflay=None): Array has size (nlay, nrow, ncol) if mflay is None or it has size (nrow, ncol) if mlay is specified. 
- See Also - -------- - Notes ----- - if both kstpkper and totim are None, will return the last entry - Examples - -------- + If both kstpkper and totim are None, the last entry will be returned. """ # One-based kstp and kper for pulling out of recarray diff --git a/flopy/utils/formattedfile.py b/flopy/utils/formattedfile.py index db53aa9cfa..dd4c795bae 100644 --- a/flopy/utils/formattedfile.py +++ b/flopy/utils/formattedfile.py @@ -117,7 +117,7 @@ def _build_index(self): Build the recordarray and iposarray, which maps the header information to the position in the formatted file. """ - self.kstpkper # array of time step/stress periods with data available + self.kstpkper # 1-based array of time step/stress periods self.recordarray # array of data headers self.iposarray # array of seek positions for each record self.nlay # Number of model layers @@ -155,7 +155,6 @@ def _build_index(self): self.recordarray = np.array(self.recordarray, self.header.get_dtype()) self.iposarray = np.array(self.iposarray) self.nlay = np.max(self.recordarray["ilay"]) - return def _store_record(self, header, ipos): """ @@ -356,10 +355,9 @@ class FormattedHeadFile(FormattedLayerFile): >>> import flopy.utils.formattedfile as ff >>> hdobj = ff.FormattedHeadFile('model.fhd', precision='single') >>> hdobj.list_records() - >>> rec = hdobj.get_data(kstpkper=(1, 50)) + >>> rec = hdobj.get_data(kstpkper=(0, 49)) >>> rec2 = ddnobj.get_data(totim=100.) - """ def __init__( @@ -372,7 +370,6 @@ def __init__( ): self.text = text super().__init__(filename, precision, verbose, kwargs) - return def _get_text_header(self): """ From 09b59ce8c0e33b32727e0f07e44efadf166ffba2 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Wed, 22 Nov 2023 16:45:32 -0500 Subject: [PATCH 096/101] refactor(pakbase): standardize ipakcb docstrings/defaults (#2001) * make ipakcb docstrings consistent for all modflow (non-mf6) packages * make default ipakcb consistent (None for ipakcb=0 and no output file) * add default_cbc_unit method to Package, support ipakcb=default for 53 --- .../modpath7_structured_transient_example.py | 6 ++-- autotest/regression/test_mf6_pandas.py | 1 - autotest/test_mbase.py | 1 - flopy/mfusg/mfusgbcf.py | 11 +++---- flopy/mfusg/mfusglpf.py | 11 +++---- flopy/mfusg/mfusgwel.py | 17 +++------- flopy/modflow/mfbcf.py | 21 ++++-------- flopy/modflow/mfdrn.py | 22 ++++--------- flopy/modflow/mfdrt.py | 23 ++++--------- flopy/modflow/mfevt.py | 10 ++---- flopy/modflow/mffhb.py | 21 ++++-------- flopy/modflow/mfghb.py | 22 ++++--------- flopy/modflow/mflak.py | 24 ++++---------- flopy/modflow/mflpf.py | 21 ++++-------- flopy/modflow/mfmnw1.py | 17 +++------- flopy/modflow/mfmnw2.py | 32 +++++-------------- flopy/modflow/mfrch.py | 21 ++++-------- flopy/modflow/mfriv.py | 21 ++++-------- flopy/modflow/mfsfr2.py | 23 ++++--------- flopy/modflow/mfstr.py | 21 ++++-------- flopy/modflow/mfsub.py | 17 +++------- flopy/modflow/mfswi2.py | 18 +++-------- flopy/modflow/mfswt.py | 22 ++++--------- flopy/modflow/mfupw.py | 22 ++++--------- flopy/modflow/mfuzf1.py | 24 ++++---------- flopy/modflow/mfwel.py | 22 ++++--------- flopy/pakbase.py | 9 ++++++ 27 files changed, 154 insertions(+), 326 deletions(-) diff --git a/.docs/Notebooks/modpath7_structured_transient_example.py b/.docs/Notebooks/modpath7_structured_transient_example.py index fb76ebff6b..0e4c22b033 100644 --- a/.docs/Notebooks/modpath7_structured_transient_example.py +++ b/.docs/Notebooks/modpath7_structured_transient_example.py @@ -238,7 +238,9 @@ def no_flow(w): [drain[0], drain[1], i + 
drain[2][0], 322.5, 100000.0, 6] for i in range(drain[2][1] - drain[2][0]) ] -drn = flopy.mf6.modflow.mfgwfdrn.ModflowGwfdrn(gwf, auxiliary=["IFACE"], stress_period_data={0: dd}) +drn = flopy.mf6.modflow.mfgwfdrn.ModflowGwfdrn( + gwf, auxiliary=["IFACE"], stress_period_data={0: dd} +) # output control headfile = f"{sim_name}.hds" @@ -410,5 +412,3 @@ def add_legend(ax): temp_dir.cleanup() except: pass - - diff --git a/autotest/regression/test_mf6_pandas.py b/autotest/regression/test_mf6_pandas.py index c8123396b5..183ed2ff0f 100644 --- a/autotest/regression/test_mf6_pandas.py +++ b/autotest/regression/test_mf6_pandas.py @@ -48,7 +48,6 @@ ) from flopy.mf6.data.mfdataplist import MFPandasList - pytestmark = pytest.mark.mf6 diff --git a/autotest/test_mbase.py b/autotest/test_mbase.py index 8a2eddcadc..8e3ee5d828 100644 --- a/autotest/test_mbase.py +++ b/autotest/test_mbase.py @@ -10,7 +10,6 @@ from flopy.mbase import resolve_exe from flopy.utils.flopy_io import relpath_safe - _system = system() diff --git a/flopy/mfusg/mfusgbcf.py b/flopy/mfusg/mfusgbcf.py index 6bda77d7cb..fcd99d4729 100644 --- a/flopy/mfusg/mfusgbcf.py +++ b/flopy/mfusg/mfusgbcf.py @@ -26,10 +26,9 @@ class MfUsgBcf(ModflowBcf): model : model object The model object (of type :class:`flopy.modflow.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 53) + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). intercellt : int Intercell transmissivities, harmonic mean (0), arithmetic mean (1), logarithmic mean (2), combination (3). (default is 0) @@ -96,9 +95,9 @@ class MfUsgBcf(ModflowBcf): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output name will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. diff --git a/flopy/mfusg/mfusglpf.py b/flopy/mfusg/mfusglpf.py index 52d0dcf259..42476656d1 100644 --- a/flopy/mfusg/mfusglpf.py +++ b/flopy/mfusg/mfusglpf.py @@ -30,10 +30,9 @@ class MfUsgLpf(ModflowLpf): model : model object The model object (of type :class:`flopy.modflowusg.mfusg.MfUsg`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0) + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). hdry : float Is the head that is assigned to cells that are converted to dry during a simulation. 
Although this value plays no role in the model @@ -169,9 +168,9 @@ class MfUsgLpf(ModflowLpf): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output name will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. diff --git a/flopy/mfusg/mfusgwel.py b/flopy/mfusg/mfusgwel.py index 1735fd8670..47ecc26d49 100644 --- a/flopy/mfusg/mfusgwel.py +++ b/flopy/mfusg/mfusgwel.py @@ -8,16 +8,10 @@ MODFLOW Guide `_. """ -from copy import deepcopy -import numpy as np -from numpy.lib.recfunctions import stack_arrays -from ..modflow.mfparbc import ModflowParBc as mfparbc from ..modflow.mfwel import ModflowWel from ..utils import MfList -from ..utils.flopy_io import ulstrd -from ..utils.utils_def import get_open_file_object from .mfusg import MfUsg @@ -29,10 +23,9 @@ class MfUsgWel(ModflowWel): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list of boundaries, or recarray of boundaries, or dictionary of boundaries For structured grid, each well is defined through definition of @@ -127,9 +120,9 @@ class MfUsgWel(ModflowWel): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. diff --git a/flopy/modflow/mfbcf.py b/flopy/modflow/mfbcf.py index 5c7e10540a..a7aed46a83 100644 --- a/flopy/modflow/mfbcf.py +++ b/flopy/modflow/mfbcf.py @@ -14,10 +14,9 @@ class ModflowBcf(Package): model : model object The model object (of type :class:`flopy.modflow.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 53) + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). intercellt : int Intercell transmissivities, harmonic mean (0), arithmetic mean (1), logarithmic mean (2), combination (3). 
(default is 0) @@ -64,9 +63,9 @@ class ModflowBcf(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output name will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -121,13 +120,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -168,7 +162,6 @@ def __init__( ) # item 1 - self.ipakcb = ipakcb self.hdry = hdry self.iwdflg = iwdflg self.wetfct = wetfct diff --git a/flopy/modflow/mfdrn.py b/flopy/modflow/mfdrn.py index c4614f5aa7..9c3a64f432 100644 --- a/flopy/modflow/mfdrn.py +++ b/flopy/modflow/mfdrn.py @@ -23,10 +23,9 @@ class ModflowDrn(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is None). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each drain cell is defined through definition of layer(int), row(int), column(int), elevation(float), @@ -73,9 +72,9 @@ class ModflowDrn(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
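The revised ipakcb docstrings all describe the same convention: pass a non-zero unit number to save cell-by-cell budget terms, or leave the default None to disable that output. A hedged sketch of the round trip (illustrative only; it assumes a complete, runnable model with output control set to save the budget):

```python
import flopy

m = flopy.modflow.Modflow("demo")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10, nper=1)

# ipakcb=53 routes DRN cell-by-cell budget terms to demo.cbc;
# the new default, ipakcb=None, writes no budget output at all
drn = flopy.modflow.ModflowDrn(
    m, ipakcb=53, stress_period_data={0: [[0, 4, 4, 10.0, 5.0]]}
)

# after the model has run, read the saved budget terms back
cbc = flopy.utils.CellBudgetFile("demo.cbc")
drains = cbc.get_data(text="DRAINS", kstpkper=(0, 0))
```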
@@ -125,13 +124,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) if options is None: options = [] @@ -157,8 +151,6 @@ def __init__( self._generate_heading() self.url = "drn.html" - self.ipakcb = ipakcb - self.np = 0 self.options = options diff --git a/flopy/modflow/mfdrt.py b/flopy/modflow/mfdrt.py index 781ddcd3dd..425f6d873d 100644 --- a/flopy/modflow/mfdrt.py +++ b/flopy/modflow/mfdrt.py @@ -23,10 +23,9 @@ class ModflowDrt(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is None). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each drain return cell is defined through definition of layer(int), row(int), column(int), elevation(float), @@ -73,9 +72,9 @@ class ModflowDrt(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
@@ -123,13 +122,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) if options is None: options = [] @@ -152,9 +146,6 @@ def __init__( self._generate_heading() self.url = "drt.html" - - self.ipakcb = ipakcb - self.np = 0 self.options = options diff --git a/flopy/modflow/mfevt.py b/flopy/modflow/mfevt.py index e15cc9bb73..3cd1ad6855 100644 --- a/flopy/modflow/mfevt.py +++ b/flopy/modflow/mfevt.py @@ -104,13 +104,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -125,7 +120,6 @@ def __init__( self._generate_heading() self.url = "evt.html" self.nevtop = nevtop - self.ipakcb = ipakcb self.external = external if self.external is False: load = True diff --git a/flopy/modflow/mffhb.py b/flopy/modflow/mffhb.py index 64764078e3..9ea0c88f44 100644 --- a/flopy/modflow/mffhb.py +++ b/flopy/modflow/mffhb.py @@ -40,10 +40,9 @@ class ModflowFhb(Package): If the simulation includes only steady-state stress periods, the flag controls how flow, head, and auxiliary-variable values will be computed for each steady-state solution. (default is 0) - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is None). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). nfhbx1 : int Number of auxiliary variables whose values will be computed for each time step for each specified-flow cell. Auxiliary variables are @@ -106,9 +105,9 @@ class ModflowFhb(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
@@ -163,13 +162,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -187,7 +181,6 @@ def __init__( self.nflw = nflw self.nhed = nhed self.ifhbss = ifhbss - self.ipakcb = ipakcb if nfhbx1 != 0: nfhbx1 = 0 self.nfhbx1 = nfhbx1 diff --git a/flopy/modflow/mfghb.py b/flopy/modflow/mfghb.py index 903c88c646..fb9ed02116 100644 --- a/flopy/modflow/mfghb.py +++ b/flopy/modflow/mfghb.py @@ -23,12 +23,10 @@ class ModflowGhb(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list, recarray, dataframe or dictionary of boundaries. - Each ghb cell is defined through definition of layer(int), row(int), column(int), stage(float), conductance(float) The simplest form is a dictionary with a lists of boundaries for each @@ -73,9 +71,9 @@ class ModflowGhb(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -123,13 +121,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -143,7 +136,6 @@ def __init__( self._generate_heading() self.url = "ghb.html" - self.ipakcb = ipakcb self.no_print = no_print self.np = 0 if options is None: diff --git a/flopy/modflow/mflak.py b/flopy/modflow/mflak.py index 50644a4f29..e8a05730fe 100644 --- a/flopy/modflow/mflak.py +++ b/flopy/modflow/mflak.py @@ -28,13 +28,9 @@ class ModflowLak(Package): Sublakes of multiple-lake systems are considered separate lakes for input purposes. The variable NLAKES is used, with certain internal assumptions and approximations, to dimension arrays for the simulation. - ipakcb : int - (ILKCB in MODFLOW documentation) - Whether or not to write cell-by-cell flows (yes if ILKCB> 0, no - otherwise). If ILKCB< 0 and "Save Budget" is specified in the Output - Control or ICBCFL is not equal to 0, the cell-by-cell flows will be - printed in the standard output file. 
ICBCFL is specified in the input - to the Output Control Option of MODFLOW. + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). lwrt : int or list of ints (one per SP) lwrt > 0, suppresses printout from the lake package. Default is 0 (to print budget information) @@ -228,9 +224,9 @@ class ModflowLak(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -296,13 +292,8 @@ def __init__( break filenames = self._prepare_filenames(filenames, nlen) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # table input files if tabdata: @@ -351,7 +342,6 @@ def __init__( options = [] self.options = options self.nlakes = nlakes - self.ipakcb = ipakcb self.theta = theta self.nssitr = nssitr self.sscncr = sscncr diff --git a/flopy/modflow/mflpf.py b/flopy/modflow/mflpf.py index 68eabd729a..dd2e589150 100644 --- a/flopy/modflow/mflpf.py +++ b/flopy/modflow/mflpf.py @@ -24,10 +24,9 @@ class ModflowLpf(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0) + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). hdry : float Is the head that is assigned to cells that are converted to dry during a simulation. Although this value plays no role in the model @@ -149,9 +148,9 @@ class ModflowLpf(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output name will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
@@ -220,13 +219,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -242,7 +236,6 @@ def __init__( nrow, ncol, nlay, nper = self.parent.nrow_ncol_nlay_nper # item 1 - self.ipakcb = ipakcb self.hdry = ( hdry # Head in cells that are converted to dry during a simulation ) diff --git a/flopy/modflow/mfmnw1.py b/flopy/modflow/mfmnw1.py index ba80376c27..316f323138 100644 --- a/flopy/modflow/mfmnw1.py +++ b/flopy/modflow/mfmnw1.py @@ -19,10 +19,9 @@ class ModflowMnw1(Package): this package will be added. mxmnw : integer maximum number of multi-node wells to be simulated - ipakcb : integer - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). iwelpt : integer verbosity flag nomoiter : integer @@ -103,13 +102,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -126,7 +120,6 @@ def __init__( self.mxmnw = ( mxmnw # -maximum number of multi-node wells to be simulated ) - self.ipakcb = ipakcb self.iwelpt = iwelpt # -verbosity flag self.nomoiter = nomoiter # -integer indicating the number of iterations for which flow in MNW wells is calculated self.kspref = kspref # -alphanumeric key indicating which set of water levels are to be used as reference values for calculating drawdown diff --git a/flopy/modflow/mfmnw2.py b/flopy/modflow/mfmnw2.py index 7a9a5548f5..fd8f1d764f 100644 --- a/flopy/modflow/mfmnw2.py +++ b/flopy/modflow/mfmnw2.py @@ -901,19 +901,9 @@ class ModflowMnw2(Package): value of "NODTOT". The model will then reset "MNWMAX" to its absolute value. The value of "ipakcb" will become the third value on that line, etc. - ipakcb : int - is a flag and a unit number: - if ipakcb > 0, then it is the unit number to which MNW cell-by-cell - flow terms will be recorded whenever cell-by-cell budget data are - written to a file (as determined by the outputcontrol options of - MODFLOW). - if ipakcb = 0, then MNW cell-by-cell flow terms will not be printed - or recorded. - if ipakcb < 0, then well injection or withdrawal rates and water - levels in the well and its multiple cells will be printed in - the main MODFLOW listing (output) file whenever cell-by-cell - budget data are written to a file (as determined by the output - control options of MODFLOW). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). mnwprnt : integer Flag controlling the level of detail of information about multi-node wells to be written to the main MODFLOW listing (output) file. 
@@ -966,9 +956,9 @@ class ModflowMnw2(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -1001,7 +991,7 @@ def __init__( model, mnwmax=0, nodtot=None, - ipakcb=0, + ipakcb=None, mnwprnt=0, aux=[], node_data=None, @@ -1020,13 +1010,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -1050,7 +1035,6 @@ def __init__( # maximum number of multi-node wells to be simulated self.mnwmax = int(mnwmax) self.nodtot = nodtot # user-specified maximum number of nodes - self.ipakcb = ipakcb self.mnwprnt = int(mnwprnt) # -verbosity flag self.aux = aux # -list of optional auxiliary parameters diff --git a/flopy/modflow/mfrch.py b/flopy/modflow/mfrch.py index eae2a0e16b..70861a9c4e 100644 --- a/flopy/modflow/mfrch.py +++ b/flopy/modflow/mfrch.py @@ -25,10 +25,9 @@ class ModflowRch(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). nrchop : int is the recharge option code. 1: Recharge to top grid layer only @@ -49,9 +48,9 @@ class ModflowRch(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
@@ -108,13 +107,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -130,7 +124,6 @@ def __init__( self.url = "rch.html" self.nrchop = nrchop - self.ipakcb = ipakcb rech_u2d_shape = get_pak_vals_shape(model, rech) irch_u2d_shape = get_pak_vals_shape(model, irch) diff --git a/flopy/modflow/mfriv.py b/flopy/modflow/mfriv.py index a557da18a3..7cbdf57c3f 100644 --- a/flopy/modflow/mfriv.py +++ b/flopy/modflow/mfriv.py @@ -23,10 +23,9 @@ class ModflowRiv(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list, recarray, dataframe, or dictionary of boundaries. Each river cell is defined through definition of layer (int), row (int), column (int), stage (float), cond (float), @@ -76,9 +75,9 @@ class ModflowRiv(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -131,13 +130,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -151,7 +145,6 @@ def __init__( self._generate_heading() self.url = "riv.html" - self.ipakcb = ipakcb self.mxactr = 0 self.np = 0 if options is None: diff --git a/flopy/modflow/mfsfr2.py b/flopy/modflow/mfsfr2.py index 1287948070..bc030770be 100644 --- a/flopy/modflow/mfsfr2.py +++ b/flopy/modflow/mfsfr2.py @@ -68,14 +68,9 @@ class ModflowSfr2(Package): computing leakage between each stream reach and active model cell. Value is in units of length. Usually a value of 0.0001 is sufficient when units of feet or meters are used in model. - ipakcb : integer - An integer value used as a flag for writing stream-aquifer leakage - values. If ipakcb > 0, unformatted leakage between each stream reach - and corresponding model cell will be saved to the main cell-by-cell - budget file whenever when a cell-by-cell budget has been specified in - Output Control (see Harbaugh and others, 2000, pages 52-55). 
If - ipakcb = 0, leakage values will not be printed or saved. Printing to - the listing file (ipakcb < 0) is not supported. + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). istcb2 : integer An integer value used as a flag for writing to a separate formatted file all information on inflows and outflows from each reach; on @@ -228,7 +223,7 @@ class ModflowSfr2(Package): filenames=None the package name will be created using the model name and package extension and the cbc output and sfr output name will be created using the model name and .cbc and .sfr.bin/.sfr.out extensions - (for example, modflowtest.cbc, and modflowtest.sfr.bin), if ipakcbc and + (for example, modflowtest.cbc, and modflowtest.sfr.bin), if ipakcb and istcb2 are numbers greater than zero. If a single string is passed the package name will be set to the string and other sfr output files will be set to the model name with the appropriate output file extensions. @@ -358,13 +353,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 3) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # add sfr flow output file if istcb2 is not None: @@ -451,7 +441,6 @@ def __init__( dleak # tolerance level of stream depth used in computing leakage ) - self.ipakcb = ipakcb # flag; unit number for writing table of SFR output to text file self.istcb2 = istcb2 diff --git a/flopy/modflow/mfstr.py b/flopy/modflow/mfstr.py index cb480f5235..07cb545f49 100644 --- a/flopy/modflow/mfstr.py +++ b/flopy/modflow/mfstr.py @@ -47,10 +47,9 @@ class ModflowStr(Package): The constant must be multiplied by 86,400 when using time units of days in the simulation. If ICALC is 0, const can be any real value. (default is 86400.) - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). istcb2 : int A flag and a unit number for the option to store streamflow out of each reach in an unformatted (binary) file. @@ -191,11 +190,11 @@ class ModflowStr(Package): filenames=None the package name will be created using the model name and package extension and the cbc output and str output name will be created using the model name and .cbc and .str.bin/.str.out extensions - (for example, modflowtest.cbc, and modflowtest.str.bin), if ipakcbc and + (for example, modflowtest.cbc, and modflowtest.str.bin), if ipakcb and istcb2 are numbers greater than zero. If a single string is passed the package will be set to the string and cbc and str output names will be created using the model name and .cbc and .str.bin/.str.out - extensions, if ipakcbc and istcb2 are numbers greater than zero. To + extensions, if ipakcb and istcb2 are numbers greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 3. Default is None. 
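Each of the per-package blocks removed in these hunks is replaced by a single helper on the Package base class (see the flopy/pakbase.py hunk near the end of this patch). Its normalization logic is equivalent to the following sketch (the standalone function name is hypothetical):

def _normalize_ipakcb(ipakcb):
    # mirrors Package.set_cbc_output_file: returns the ipakcb value to
    # store on the package and whether an output file is registered
    if ipakcb is None:
        return 0, False       # None: budget output disabled
    if ipakcb == "default":
        return 53, True       # "default": conventional cbc unit number
    return ipakcb, True       # caller-specified unit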
@@ -250,13 +249,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 3) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) if istcb2 is not None: model.add_output_file( @@ -282,7 +276,6 @@ def __init__( self.ntrib = ntrib self.ndiv = ndiv self.const = const - self.ipakcb = ipakcb self.istcb2 = istcb2 # issue exception if ntrib is greater than 10 diff --git a/flopy/modflow/mfsub.py b/flopy/modflow/mfsub.py index f56d14e653..0453ed2a91 100644 --- a/flopy/modflow/mfsub.py +++ b/flopy/modflow/mfsub.py @@ -22,10 +22,9 @@ class ModflowSub(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). isuboc : int isuboc is a flag used to control output of information generated by the SUB Package. (default is 0). @@ -254,13 +253,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 9) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) if idsave is not None: model.add_output_file( @@ -312,7 +306,6 @@ def __init__( self._generate_heading() self.url = "sub.html" - self.ipakcb = ipakcb self.isuboc = isuboc self.idsave = idsave self.idrest = idrest diff --git a/flopy/modflow/mfswi2.py b/flopy/modflow/mfswi2.py index a28ef5e9f7..63acd3b947 100644 --- a/flopy/modflow/mfswi2.py +++ b/flopy/modflow/mfswi2.py @@ -26,10 +26,9 @@ class ModflowSwi2(Package): flag indicating the density distribution. (default is 1). iswizt : int unit number for zeta output. (default is None). - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is None). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). iswiobs : int flag and unit number SWI2 observation output. (default is 0). 
options : list of strings @@ -242,13 +241,8 @@ def __init__( else: iswizt = 0 - # update external file information with swi2 cell-by-cell output, - # if necessary - if ipakcb is not None: - fname = filenames[2] - model.add_output_file(ipakcb, fname=fname, package=self._ftype()) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[2]) # Process observations if nobs != 0: @@ -338,8 +332,6 @@ def __init__( self.nobs = nobs self.iswizt = iswizt self.iswiobs = iswiobs - # set cbc unit - self.ipakcb = ipakcb # set solver flags self.nsolver = nsolver diff --git a/flopy/modflow/mfswt.py b/flopy/modflow/mfswt.py index 4d8d1c7650..8c9e67f757 100644 --- a/flopy/modflow/mfswt.py +++ b/flopy/modflow/mfswt.py @@ -22,10 +22,9 @@ class ModflowSwt(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). iswtoc : int iswtoc is a flag used to control output of information generated by the SUB Package. (default is 0). @@ -193,7 +192,7 @@ class ModflowSwt(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name and other swt output files will be created using the model name and .cbc and swt output - extensions (for example, modflowtest.cbc), if ipakcbc and other + extensions (for example, modflowtest.cbc), if ipakcb and other swt output files (dataset 16) are numbers greater than zero. If a single string is passed the package name will be set to the string and other swt output files will be set to the model name with @@ -368,13 +367,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 15) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) item16_extensions = [ "swt_subsidence.hds", @@ -420,7 +414,6 @@ def __init__( self._generate_heading() self.url = "swt.html" - self.ipakcb = ipakcb self.iswtoc = iswtoc self.nsystm = nsystm @@ -624,9 +617,6 @@ def load(cls, f, model, ext_unit_dict=None): int(t[6]), ) - # if ipakcb > 0: - # ipakcb = 53 - # read dataset 2 lnwt = None if nsystm > 0: diff --git a/flopy/modflow/mfupw.py b/flopy/modflow/mfupw.py index 093ef0b4f5..e26097ff1d 100644 --- a/flopy/modflow/mfupw.py +++ b/flopy/modflow/mfupw.py @@ -25,10 +25,9 @@ class ModflowUpw(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). hdry : float Is the head that is assigned to cells that are converted to dry during a simulation. 
Although this value plays no role in the model @@ -108,9 +107,9 @@ class ModflowUpw(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output name will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. @@ -172,13 +171,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -193,8 +187,6 @@ def __init__( self.url = "upw_upstream_weighting_package.html" nrow, ncol, nlay, nper = self.parent.nrow_ncol_nlay_nper - # item 1 - self.ipakcb = ipakcb # Head in cells that are converted to dry during a simulation self.hdry = hdry # number of UPW parameters diff --git a/flopy/modflow/mfuzf1.py b/flopy/modflow/mfuzf1.py index c680876407..5acc699eb3 100644 --- a/flopy/modflow/mfuzf1.py +++ b/flopy/modflow/mfuzf1.py @@ -7,7 +7,6 @@ `_. """ -import warnings import numpy as np @@ -58,14 +57,9 @@ class ModflowUzf1(Package): specifies whether or not evapotranspiration (ET) will be simulated. ET will not be simulated if IETFLG is zero, otherwise it will be simulated. (default is 0) - ipakcb : integer - flag for writing ground-water recharge, ET, and ground-water - discharge to land surface rates to a separate unformatted file using - subroutine UBUDSV. If ipakcb>0, it is the unit number to which the - cell-by-cell rates will be written when 'SAVE BUDGET' or a non-zero - value for ICBCFL is specified in Output Control. If ipakcb less than - or equal to 0, cell-by-cell rates will not be written to a file. - (default is 57) + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). iuzfcb2 : integer flag for writing ground-water recharge, ET, and ground-water discharge to land surface rates to a separate unformatted file using @@ -266,7 +260,7 @@ class ModflowUzf1(Package): and package extension and the cbc output, uzf output, and uzf observation names will be created using the model name and .cbc, uzfcb2.bin, and .uzf#.out extensions (for example, modflowtest.cbc, - and modflowtest.uzfcd2.bin), if ipakcbc, iuzfcb2, and len(uzgag) are + and modflowtest.uzfcb2.bin), if ipakcb, iuzfcb2, and len(uzgag) are numbers greater than zero. 
For uzf observations the file extension is created using the uzf observation file unit number (for example, for uzf observations written to unit 123 the file extension would be @@ -397,13 +391,8 @@ def __init__( nlen += len(uzgag) filenames = self._prepare_filenames(filenames, nlen) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - abs(ipakcb), fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) if iuzfcb2 is not None: model.add_output_file( @@ -524,7 +513,6 @@ def __init__( # must be active if IRUNFLG is not zero. self.irunflg = irunflg self.ietflg = ietflg - self.ipakcb = ipakcb self.iuzfcb2 = iuzfcb2 if iuzfopt > 0: self.ntrail2 = ntrail2 diff --git a/flopy/modflow/mfwel.py b/flopy/modflow/mfwel.py index 69a51e42f5..c9e82d3b82 100644 --- a/flopy/modflow/mfwel.py +++ b/flopy/modflow/mfwel.py @@ -24,10 +24,9 @@ class ModflowWel(Package): model : model object The model object (of type :class:`flopy.modflow.mf.Modflow`) to which this package will be added. - ipakcb : int - A flag that is used to determine if cell-by-cell budget data should be - saved. If ipakcb is non-zero cell-by-cell budget data will be saved. - (default is 0). + ipakcb : int, optional + Toggles whether cell-by-cell budget data should be saved. If None or zero, + budget data will not be saved (default is None). stress_period_data : list, recarray, dataframe or dictionary of boundaries. Each well is defined through definition of layer (int), row (int), column (int), flux (float). @@ -73,9 +72,9 @@ class ModflowWel(Package): filenames=None the package name will be created using the model name and package extension and the cbc output name will be created using the model name and .cbc extension (for example, modflowtest.cbc), - if ipakcbc is a number greater than zero. If a single string is passed + if ipakcb is a number greater than zero. If a single string is passed the package will be set to the string and cbc output names will be - created using the model name and .cbc extension, if ipakcbc is a + created using the model name and .cbc extension, if ipakcb is a number greater than zero. To define the names for all package files (input and output) the length of the list of strings should be 2. Default is None. 
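Once a model configured this way has been run, the saved budget can be read back with flopy's budget-file reader (a sketch; "WELLS" is the record label MODFLOW-2005 writes for the WEL package):

from flopy.utils import CellBudgetFile

cbc = CellBudgetFile("modflowtest.cbc")
cbc.list_records()                       # show the saved budget terms
wel_flows = cbc.get_data(text="WELLS")   # one array per saved time step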
@@ -159,13 +158,8 @@ def __init__( # set filenames filenames = self._prepare_filenames(filenames, 2) - # update external file information with cbc output, if necessary - if ipakcb is not None: - model.add_output_file( - ipakcb, fname=filenames[1], package=self._ftype() - ) - else: - ipakcb = 0 + # cbc output file + self.set_cbc_output_file(ipakcb, model, filenames[1]) # call base package constructor super().__init__( @@ -178,8 +172,6 @@ def __init__( self._generate_heading() self.url = "wel.html" - - self.ipakcb = ipakcb self.np = 0 if options is None: diff --git a/flopy/pakbase.py b/flopy/pakbase.py index 21a8a8ce16..c7b4297d07 100644 --- a/flopy/pakbase.py +++ b/flopy/pakbase.py @@ -1258,3 +1258,12 @@ def load( level=0, ) return pak + + def set_cbc_output_file(self, ipakcb, model, fname): + if ipakcb is None: + ipakcb = 0 + else: + if ipakcb == "default": + ipakcb = 53 + model.add_output_file(ipakcb, fname=fname, package=self._ftype()) + self.ipakcb = ipakcb From 11f518cd68be596c3f07b052f9d3320880c38170 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 24 Nov 2023 15:50:26 -0500 Subject: [PATCH 097/101] fix(release.yml): don't regenerate pkgs from mf6 main on release (#2014) --- .github/workflows/release.yml | 3 --- 1 file changed, 3 deletions(-) diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index e130f3a337..ca93ba9ff3 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -63,9 +63,6 @@ jobs: python -c "import flopy; print(f'FloPy version: {flopy.__version__}')" echo "version=${ver#"v"}" >> $GITHUB_OUTPUT - - name: Update FloPy packages - run: python -m flopy.mf6.utils.generate_classes --ref master --no-backup - - name: Lint Python files run: python scripts/pull_request_prepare.py From 0da6b8ecc9087cc9c68a7d90999f61eb78f0f9ac Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 24 Nov 2023 21:05:30 -0500 Subject: [PATCH 098/101] fix(release.yml): fix update changelog step (#2015) --- .github/workflows/release.yml | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index ca93ba9ff3..2377832112 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -102,14 +102,13 @@ jobs: sed -i 's/#### Refactor/#### Refactoring/' CHANGELOG.md sed -i 's/#### Test/#### Testing/' CHANGELOG.md - # prepend changelog to .docs/md/version_changes.md - file=".docs/md/version_changes.md" + # prepend release changelog to cumulative changelog + clog=".docs/md/version_changes.md" temp="version_changes.md" - clog=$(cat CHANGELOG.md) - echo "$(tail -n +2 $file)" > $file - cat $clog > $temp - sudo mv $temp $file - sed -i '1i # Changelog' $file + echo "$(tail -n +2 $clog)" > $clog + cat CHANGELOG.md $clog > $temp + sudo mv $temp $clog + sed -i '1i # Changelog' $clog - name: Upload changelog uses: actions/upload-artifact@v3 From 338f6d073d6ba7edecc12405f3da83ea51e6cc26 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Fri, 24 Nov 2023 21:51:09 -0500 Subject: [PATCH 099/101] refactor(.gitattributes): exclude examples/data from linguist (#2017) --- .gitattributes | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.gitattributes b/.gitattributes index dd589b5d93..21543a50ab 100644 --- a/.gitattributes +++ b/.gitattributes @@ -16,3 +16,6 @@ # Do not modify the model data in various directories examples/data/** binary .docs/groundwater_paper/uspb/** binary + +# Exclude example data folder +examples/data/** linguist-documentation From 
89698c5a32e09c168b9d0caa6136dadb09904d45 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Sat, 25 Nov 2023 15:51:55 +0000 Subject: [PATCH 100/101] ci(release): set version to 3.5.0, update plugins from DFN files, update changelog --- .docs/md/version_changes.md | 76 +++++++++++++++++++++++++++++++++++++ CITATION.cff | 68 ++++++++++++++++----------------- README.md | 18 ++++----- autotest/test_gridutil.py | 13 +++++-- code.json | 6 +-- docs/PyPI_release.md | 16 ++++---- flopy/DISCLAIMER.md | 14 +++---- flopy/version.py | 4 +- version.txt | 2 +- 9 files changed, 147 insertions(+), 70 deletions(-) diff --git a/.docs/md/version_changes.md b/.docs/md/version_changes.md index d6e05bef81..991872fa7c 100644 --- a/.docs/md/version_changes.md +++ b/.docs/md/version_changes.md @@ -1,4 +1,80 @@ # Changelog +### Version 3.5.0 + +#### New features + +* [feat(simulation+model options)](https://github.com/modflowpy/flopy/commit/a16a379ef61cc16594ac6ac9eadb5ce4c6a4cee1): Dynamically generate simulation options from simulation namefile dfn (#1842). Committed by spaulins-usgs on 2023-07-10. +* [feat(binaryfile)](https://github.com/modflowpy/flopy/commit/79311120d0daeb3ba1281c70faf97fcd92a1fde5): Add reverse() method to HeadFile, CellBudgetFile (#1829). Committed by w-bonelli on 2023-07-29. +* [feat(get-modflow)](https://github.com/modflowpy/flopy/commit/1cb7594d12a2aa385a567613e9f840de63ba7157): Allow specifying repo owner (#1910). Committed by w-bonelli on 2023-08-08. +* [feat(generate_classes)](https://github.com/modflowpy/flopy/commit/e02030876c20d6bf588134bc0ab0cc1c75c0c46e): Create a command-line interface (#1912). Committed by Mike Taves on 2023-08-16. +* [feat(gridutil)](https://github.com/modflowpy/flopy/commit/e20a29814afd97650d6ee2b10f0939893be64551): Add function to help create DISV grid (#1952). Committed by langevin-usgs on 2023-09-18. +* [feat(pandas list)](https://github.com/modflowpy/flopy/commit/6a46a9b75286d7e268d4addc77946246ccb5db56): Fix for handling special case where boundname set but not used (#1982). Committed by scottrp on 2023-10-06. +* [feat(MfSimulationList)](https://github.com/modflowpy/flopy/commit/b34f154362320059d92050b54014ec1fc9c9f09f): Add functionality to parse the mfsim.lst file (#2005). Committed by jdhughes-usgs on 2023-11-14. +* [feat(modflow)](https://github.com/modflowpy/flopy/commit/be104c4a38d2b1f4acfbdd5a49a5b3ddb325f852): Support dataframe for pkg data (#2010). Committed by wpbonelli on 2023-11-22. +* [feat(mfsimlist)](https://github.com/modflowpy/flopy/commit/121ca78f6073813cec082cfd01f483dad6838781): Add functionality to parse memory_print_options (#2009). Committed by jdhughes-usgs on 2023-11-22. + +#### Bug fixes + +* [fix(exchange and gnc package cellids)](https://github.com/modflowpy/flopy/commit/a84d88596f8b8e8f9f8fa074ad8fce626a84ebd0): #1866 (#1871). Committed by spaulins-usgs on 2023-07-11. +* [fix(binaryfile/gridutil)](https://github.com/modflowpy/flopy/commit/90f87f2932e17955dee68ec53347c9d06178f94a): Avoid numpy deprecation warnings (#1868). Committed by w-bonelli on 2023-07-12. +* [fix(binary)](https://github.com/modflowpy/flopy/commit/ed549798333cf2186516d55ffb1e9d9a537a4a0f): Fix binary header information (#1877). Committed by jdhughes-usgs on 2023-07-16. +* [fix(time series)](https://github.com/modflowpy/flopy/commit/8b03916b55fb7a923d13648237147124d77f09f5): Fix for multiple time series attached to single package (#1867) (#1873). Committed by spaulins-usgs on 2023-07-20. 
+* [fix(check)](https://github.com/modflowpy/flopy/commit/8c330a7890fe732c5489ce6a2d5184a7b2fb70ee): Check now works properly with confined conditions (#1880) (#1882). Committed by spaulins-usgs on 2023-07-27. +* [fix(mtlistfile)](https://github.com/modflowpy/flopy/commit/9a22f639865b9d81716f7fcecdfe33b3e5a352a2): Fix reading MT3D budget (#1899). Committed by Ralf Junghanns on 2023-08-03. +* [fix(check)](https://github.com/modflowpy/flopy/commit/68f0c3bafcd2c806f22b0f75e8d18cc5c4428aa2): Updated flopy's check to work with cellid -1 values (#1885). Committed by spaulins-usgs on 2023-08-06. +* [fix(BaseModel)](https://github.com/modflowpy/flopy/commit/c25c0b34a173fa9788ccd7a0de16da2d02f96e6c): Don't suppress error if exe not found (#1901). Committed by w-bonelli on 2023-08-07. +* [fix(modelgrid)](https://github.com/modflowpy/flopy/commit/99f680feb39cb9450e1ce052024bb3da45e264d8): Retain crs data from classic nam files (#1904). Committed by Mike Taves on 2023-08-10. +* [fix(keyword data)](https://github.com/modflowpy/flopy/commit/3cef778bdf979ac249d84d106ae7db01d5d8e391): Optional keywords (#1920). Committed by spaulins-usgs on 2023-08-16. +* [fix(GridIntersect)](https://github.com/modflowpy/flopy/commit/f2064d8ce5d8bca6340c86b4947070031104104a): Combine list of geometries using unary_union (#1923). Committed by Mike Taves on 2023-08-21. +* [fix(gridintersect)](https://github.com/modflowpy/flopy/commit/021ffaa8c406cda742f239673c4d1a802021b8b9): Add multilinestring tests (#1924). Committed by Davíd Brakenhoff on 2023-08-21. +* [fix(binary file)](https://github.com/modflowpy/flopy/commit/66b0903f4812b6cef3020e7cdfd6e2c2d26f6d52): Was writing binary file information twice to external files (#1925) (#1928). Committed by scottrp on 2023-08-25. +* [fix(ParticleData)](https://github.com/modflowpy/flopy/commit/5d57fb78b27238ea69adbdb3438c8e17db55ddad): Fix docstring, structured default is False (#1935). Committed by w-bonelli on 2023-08-25. +* [fix(generate_classes)](https://github.com/modflowpy/flopy/commit/4f6cd47411b96460495b1f7d6582860a4d655060): Use branch arg if provided (#1938). Committed by w-bonelli on 2023-08-31. +* [fix(remove_model)](https://github.com/modflowpy/flopy/commit/71855bdfcd37e95194b6e5928ae909a6f7ef15fa): Remove_model method fix and tests (#1945). Committed by scottrp on 2023-09-14. +* [fix(export_contours/f)](https://github.com/modflowpy/flopy/commit/9e9d7302cd771ed305c96fbeb406e210c0d9aea7): Support matplotlib 3.8+ (#1951). Committed by wpbonelli on 2023-09-19. +* [fix(resolve_exe)](https://github.com/modflowpy/flopy/commit/d7d7f6649118a84d869522b52b20a4af8a397ebe): Support extensionless abs/rel paths on windows (#1957). Committed by wpbonelli on 2023-09-24. +* [fix(model_splitter.py)](https://github.com/modflowpy/flopy/commit/5b5eb4ec20c833df4117415d433513bf3a1b4aa5): Standardizing naming of iuzno, rno, lakeno, & wellno to ifno (#1963). Committed by Eric Morway on 2023-09-25. +* [fix(mbase)](https://github.com/modflowpy/flopy/commit/637632cfa3ddbd2629dfc257759d36b851ee0063): Warn if duplicate pkgs or units (#1964). Committed by wpbonelli on 2023-09-26. +* [fix(get_structured_faceflows)](https://github.com/modflowpy/flopy/commit/158d2d69fdfdd2cefbcdaa1a403f9103f51d2d70): Cover edge cases, expand tests (#1968). Committed by wpbonelli on 2023-09-29. +* [fix(CellBudgetFile)](https://github.com/modflowpy/flopy/commit/4b105e866e4c58380415a25d131e9d96fc82a4ba): Detect compact fmt by negative nlay (#1966). Committed by wpbonelli on 2023-09-30. 
+* [fix(pandas list)](https://github.com/modflowpy/flopy/commit/f77989d33955baa76032d9341226385215e45821): Deal with cellids with inconsistent types (#1980). Committed by scottrp on 2023-10-06. +* [fix(model_splitter)](https://github.com/modflowpy/flopy/commit/5ebc216822a86f32c7a2ab9a3735f021475dc0f4): Check keys in mftransient array (#1998). Committed by jdhughes-usgs on 2023-11-13. +* [fix(benchmarks)](https://github.com/modflowpy/flopy/commit/16183c3059f4a9bb10d74423500b566eb41d3d6e): Fix benchmark post-processing (#2004). Committed by wpbonelli on 2023-11-14. +* [fix(MfSimulationList)](https://github.com/modflowpy/flopy/commit/7cc5e6f0b4861e2f9d85eb1e25592721d0427227): Add missing seek to get_runtime method (#2006). Committed by mjr-deltares on 2023-11-15. +* [fix(get_disu_kwargs)](https://github.com/modflowpy/flopy/commit/419ae0e0c15b4f95f6ea334fe49f79e29917130b): Incorrect indexing of delr and delc (#2011). Committed by langevin-usgs on 2023-11-21. +* [fix(PlotCrossSection)](https://github.com/modflowpy/flopy/commit/6e3c7911c1e94a9f8c36563662471c68a7aeeefc): Boundary conditions not plotting for DISU (#2012). Committed by langevin-usgs on 2023-11-21. +* [fix(release.yml)](https://github.com/modflowpy/flopy/commit/11f518cd68be596c3f07b052f9d3320880c38170): Don't regenerate pkgs from mf6 main on release (#2014). Committed by wpbonelli on 2023-11-24. +* [fix(release.yml)](https://github.com/modflowpy/flopy/commit/0da6b8ecc9087cc9c68a7d90999f61eb78f0f9ac): Fix update changelog step (#2015). Committed by wpbonelli on 2023-11-25. + +#### Refactoring + +* [refactor(_set_neighbors)](https://github.com/modflowpy/flopy/commit/9168b3a538a5f604010ad5df17ea7e868b8f913f): Check for closed iverts and remove closing ivert (#1876). Committed by Joshua Larsen on 2023-07-14. +* [refactor(crs)](https://github.com/modflowpy/flopy/commit/02fae7d8084f52e852722338038b6ef7a2691e61): Provide support without pyproj, other deprecations (#1850). Committed by Mike Taves on 2023-07-19. +* [refactor(Notebooks)](https://github.com/modflowpy/flopy/commit/c261ee8145e2791be2c2cc944e5a55cf19ef1560): Apply pyformat and black QA tools (#1879). Committed by Mike Taves on 2023-07-24. +* [refactor](https://github.com/modflowpy/flopy/commit/8c3d7dbbaa753893d746965f9ecc48dc68b274cd): Require pandas>=2.0.0 as core dependency (#1887). Committed by w-bonelli on 2023-08-01. +* [refactor(expired deprecation)](https://github.com/modflowpy/flopy/commit/47e5e359037518f29d14469531db68999d308a0f): Raise AttributeError with Grid.thick and Grid.saturated_thick (#1884). Committed by Mike Taves on 2023-08-01. +* [refactor(pathline/endpoint plots)](https://github.com/modflowpy/flopy/commit/54d6099eb71a7789121bcfde8816b23fa06c258e): Support recarray or dataframe (#1888). Committed by w-bonelli on 2023-08-01. +* [refactor(expired deprecation)](https://github.com/modflowpy/flopy/commit/70b9a37153bf51be7cc06db513b24950ce122fd2): Remove warning for third parameter of Grid.intersect (#1883). Committed by Mike Taves on 2023-08-01. +* [refactor(dependencies)](https://github.com/modflowpy/flopy/commit/dd77e72b9676be8834dcf0a08e6f55fd35e2bc29): Constrain sphinx >=4 (#1898). Committed by w-bonelli on 2023-08-02. +* [refactor(dependencies)](https://github.com/modflowpy/flopy/commit/03fa01fe09db45e30fe2faf4c5160622b78f1b24): Constrain sphinx-rtd-theme >=1 (#1900). Committed by w-bonelli on 2023-08-03. +* [refactor(mf6)](https://github.com/modflowpy/flopy/commit/809e624fe77c4d112e482698434e2199345e68c2): Remove deprecated features (#1894). 
Committed by w-bonelli on 2023-08-03. +* [refactor(plotutil)](https://github.com/modflowpy/flopy/commit/cf675c6b892a0d484f91465a4ad6600ffe4d518e): Remove deprecated utilities (#1891). Committed by w-bonelli on 2023-08-03. +* [refactor(shapefile_utils)](https://github.com/modflowpy/flopy/commit/6f2da88f9b2d53f94f9c003ddfde3daf6952e3ba): Remove deprecated SpatialReference usages (#1892). Committed by w-bonelli on 2023-08-03. +* [refactor(vtk)](https://github.com/modflowpy/flopy/commit/ca838b17e89114be8952d07e85d88b128d2ec2c3): Remove deprecated export_* functions (#1890). Committed by w-bonelli on 2023-08-03. +* [refactor(generate_classes)](https://github.com/modflowpy/flopy/commit/72370269e2d6933245c65b3c93beb532016593f3): Deprecate branch for ref, introduce repo, test commit hashes (#1907). Committed by w-bonelli on 2023-08-09. +* [refactor(expired deprecation)](https://github.com/modflowpy/flopy/commit/ed3a0cd68ad1acb68fba750f415fa095a19b741a): Remaining references to SpatialReference (#1914). Committed by Mike Taves on 2023-08-11. +* [refactor(Mf6Splitter)](https://github.com/modflowpy/flopy/commit/3a1ae0bba3171561f0b4b78a2bb892fdd0c55e90): Control record and additional splitting checks (#1919). Committed by Joshua Larsen on 2023-08-21. +* [refactor(triangle)](https://github.com/modflowpy/flopy/commit/9ca601259604b1805d8ec1a30b0a883a7b5f019a): Raise if output files not found (#1954). Committed by wpbonelli on 2023-09-22. +* [refactor(recarray_utils)](https://github.com/modflowpy/flopy/commit/941a5f105b10f50fc71b14ea05abf499c4f65fa5): Deprecate functions, use numpy builtins (#1960). Committed by wpbonelli on 2023-09-27. +* [refactor(contour_array)](https://github.com/modflowpy/flopy/commit/4ed68a2be1e6b4f607e27c38ce5eab9f66492e3f): Add layer param, update docstrings, expand tests (#1975). Committed by wpbonelli on 2023-10-18. +* [refactor(model_splitter.py)](https://github.com/modflowpy/flopy/commit/4ef699e10b1fbd80055b85ca00a17b96d1c627ce): (#1994). Committed by Joshua Larsen on 2023-11-01. +* [refactor(modflow)](https://github.com/modflowpy/flopy/commit/696a209416f91bc4a9e61b4392f4218e9bd83408): Remove deprecated features (#1893). Committed by wpbonelli on 2023-11-03. +* [refactor](https://github.com/modflowpy/flopy/commit/af8954b45903680a2931573453228b03ea7beb6b): Support python3.12, simplify tests and dependencies (#1999). Committed by wpbonelli on 2023-11-13. +* [refactor(msfsr2)](https://github.com/modflowpy/flopy/commit/4eea1870cb27aa2976cbf5366a6c2cb942a2f0c4): Write sfr_botm_conflicts.chk to model workspace (#2002). Committed by wpbonelli on 2023-11-14. +* [refactor(shapefile_utils)](https://github.com/modflowpy/flopy/commit/0f8b521407694442c3bb2e50ce3d4f6207f86e48): Warn if fieldname truncated per 10 char limit (#2003). Committed by wpbonelli on 2023-11-14. +* [refactor(pakbase)](https://github.com/modflowpy/flopy/commit/09b59ce8c0e33b32727e0f07e44efadf166ffba2): Standardize ipakcb docstrings/defaults (#2001). Committed by wpbonelli on 2023-11-22. +* [refactor(.gitattributes)](https://github.com/modflowpy/flopy/commit/338f6d073d6ba7edecc12405f3da83ea51e6cc26): Exclude examples/data from linguist (#2017). Committed by wpbonelli on 2023-11-25. + ### Version 3.4.3 #### Bug fixes diff --git a/CITATION.cff b/CITATION.cff index 19aafc1b85..9b28a2660a 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -3,8 +3,8 @@ message: If you use this software, please cite both the article from preferred-c references, and the software itself. 
type: software title: FloPy -version: 3.5.0.dev0 -date-released: '2023-09-30' +version: 3.5.0 +date-released: '2023-11-25' doi: 10.5066/F7BK19FH abstract: A Python package to create, run, and post-process MODFLOW-based models. repository-artifact: https://pypi.org/project/flopy @@ -80,35 +80,35 @@ preferred-citation: year: 2023 month: 5 references: - - authors: - - family-names: Bakker - given-names: Mark - orcid: https://orcid.org/0000-0002-5629-2861 - - family-names: Post - given-names: Vincent - orcid: https://orcid.org/0000-0002-9463-3081 - - family-names: Langevin - given-names: Christian D. - orcid: https://orcid.org/0000-0001-5610-9759 - - family-names: Hughes - given-names: Joseph D. - orcid: https://orcid.org/0000-0003-1311-2354 - - family-names: White - given-names: Jeremy T. - orcid: https://orcid.org/0000-0002-4950-1469 - - family-names: Starn - given-names: Jon Jeffrey - orcid: https://orcid.org/0000-0001-5909-0010 - - family-names: Fienen - given-names: Michael N. - orcid: https://orcid.org/0000-0002-7756-4651 - type: article - title: Scripting MODFLOW Model Development Using Python and FloPy - doi: 10.1111/gwat.12413 - journal: Groundwater - volume: 54 - issue: 5 - start: 733 - end: 739 - year: 2016 - month: 9 +- authors: + - family-names: Bakker + given-names: Mark + orcid: https://orcid.org/0000-0002-5629-2861 + - family-names: Post + given-names: Vincent + orcid: https://orcid.org/0000-0002-9463-3081 + - family-names: Langevin + given-names: Christian D. + orcid: https://orcid.org/0000-0001-5610-9759 + - family-names: Hughes + given-names: Joseph D. + orcid: https://orcid.org/0000-0003-1311-2354 + - family-names: White + given-names: Jeremy T. + orcid: https://orcid.org/0000-0002-4950-1469 + - family-names: Starn + given-names: Jon Jeffrey + orcid: https://orcid.org/0000-0001-5909-0010 + - family-names: Fienen + given-names: Michael N. + orcid: https://orcid.org/0000-0002-7756-4651 + type: article + title: Scripting MODFLOW Model Development Using Python and FloPy + doi: 10.1111/gwat.12413 + journal: Groundwater + volume: 54 + issue: 5 + start: 733 + end: 739 + year: 2016 + month: 9 diff --git a/README.md b/README.md index aa4b71c9fc..148a295596 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@ flopy3 -### Version 3.5.0.dev0 (preliminary) +### Version 3.5.0 [![flopy continuous integration](https://github.com/modflowpy/flopy/actions/workflows/commit.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/commit.yml) [![Read the Docs](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml/badge.svg?branch=develop)](https://github.com/modflowpy/flopy/actions/workflows/rtd.yml) @@ -142,7 +142,7 @@ How to Cite ##### ***Software/Code citation for FloPy:*** -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0.dev0 (preliminary): U.S. Geological Survey Software Release, 30 September 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0: U.S. 
Geological Survey Software Release, 25 November 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Additional FloPy Related Publications @@ -167,12 +167,10 @@ MODFLOW Resources Disclaimer ---------- -This software is preliminary or provisional and is subject to revision. It is -being provided to meet the need for timely best science. This software is -provided "as is" and "as-available", and makes no representations or warranties -of any kind concerning the software, whether express, implied, statutory, or -other. This includes, without limitation, warranties of title, -merchantability, fitness for a particular purpose, non-infringement, absence -of latent or other defects, accuracy, or the presence or absence of errors, -whether or not known or discoverable. +This software is provided "as is" and "as-available", and makes no +representations or warranties of any kind concerning the software, whether +express, implied, statutory, or other. This includes, without limitation, +warranties of title, merchantability, fitness for a particular purpose, +non-infringement, absence of latent or other defects, accuracy, or the +presence or absence of errors, whether or not known or discoverable. diff --git a/autotest/test_gridutil.py b/autotest/test_gridutil.py index 9be73a5430..48522c2c9f 100644 --- a/autotest/test_gridutil.py +++ b/autotest/test_gridutil.py @@ -86,8 +86,8 @@ def test_get_lni_infers_layer_count_when_int_ncpl(ncpl, nodes, expected): 1, # nlay 3, # nrow 4, # ncol - np.array(4 * [4.]), # delr - np.array(3 * [3.]), # delc + np.array(4 * [4.0]), # delr + np.array(3 * [3.0]), # delc np.array([-10]), # top np.array([-30.0]), # botm ), @@ -95,7 +95,14 @@ def test_get_lni_infers_layer_count_when_int_ncpl(ncpl, nodes, expected): ) def test_get_disu_kwargs(nlay, nrow, ncol, delr, delc, tp, botm): kwargs = get_disu_kwargs( - nlay=nlay, nrow=nrow, ncol=ncol, delr=delr, delc=delc, tp=tp, botm=botm, return_vertices=True + nlay=nlay, + nrow=nrow, + ncol=ncol, + delr=delr, + delc=delc, + tp=tp, + botm=botm, + return_vertices=True, ) from pprint import pprint diff --git a/code.json b/code.json index d449448d7c..127269ee7d 100644 --- a/code.json +++ b/code.json @@ -1,6 +1,6 @@ [ { - "status": "Preliminary", + "status": "Release", "languages": [ "python" ], @@ -29,9 +29,9 @@ "downloadURL": "https://code.usgs.gov/usgs/modflow/flopy/archive/master.zip", "vcs": "git", "laborHours": -1, - "version": "3.5.0.dev0", + "version": "3.5.0", "date": { - "metadataLastUpdated": "2023-09-30" + "metadataLastUpdated": "2023-11-25" }, "organization": "U.S. Geological Survey", "permissions": { diff --git a/docs/PyPI_release.md b/docs/PyPI_release.md index 46c9d45760..daef4656a8 100644 --- a/docs/PyPI_release.md +++ b/docs/PyPI_release.md @@ -30,18 +30,16 @@ How to Cite *Software/Code citation for FloPy:* -[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0.dev0 (preliminary): U.S. Geological Survey Software Release, 30 September 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) +[Bakker, Mark, Post, Vincent, Hughes, J. D., Langevin, C. D., White, J. T., Leaf, A. T., Paulinski, S. R., Bellino, J. C., Morway, E. D., Toews, M. W., Larsen, J. D., Fienen, M. N., Starn, J. J., Brakenhoff, D. A., and Bonelli, W. P., 2023, FloPy v3.5.0: U.S. 
Geological Survey Software Release, 25 November 2023, https://doi.org/10.5066/F7BK19FH](https://doi.org/10.5066/F7BK19FH) Disclaimer ---------- -This software is preliminary or provisional and is subject to revision. It is -being provided to meet the need for timely best science. This software is -provided "as is" and "as-available", and makes no representations or warranties -of any kind concerning the software, whether express, implied, statutory, or -other. This includes, without limitation, warranties of title, -merchantability, fitness for a particular purpose, non-infringement, absence -of latent or other defects, accuracy, or the presence or absence of errors, -whether or not known or discoverable. +This software is provided "as is" and "as-available", and makes no +representations or warranties of any kind concerning the software, whether +express, implied, statutory, or other. This includes, without limitation, +warranties of title, merchantability, fitness for a particular purpose, +non-infringement, absence of latent or other defects, accuracy, or the +presence or absence of errors, whether or not known or discoverable. diff --git a/flopy/DISCLAIMER.md b/flopy/DISCLAIMER.md index 81ba20d032..0e68af88f7 100644 --- a/flopy/DISCLAIMER.md +++ b/flopy/DISCLAIMER.md @@ -1,11 +1,9 @@ Disclaimer ---------- -This software is preliminary or provisional and is subject to revision. It is -being provided to meet the need for timely best science. This software is -provided "as is" and "as-available", and makes no representations or warranties -of any kind concerning the software, whether express, implied, statutory, or -other. This includes, without limitation, warranties of title, -merchantability, fitness for a particular purpose, non-infringement, absence -of latent or other defects, accuracy, or the presence or absence of errors, -whether or not known or discoverable. +This software is provided "as is" and "as-available", and makes no +representations or warranties of any kind concerning the software, whether +express, implied, statutory, or other. This includes, without limitation, +warranties of title, merchantability, fitness for a particular purpose, +non-infringement, absence of latent or other defects, accuracy, or the +presence or absence of errors, whether or not known or discoverable. diff --git a/flopy/version.py b/flopy/version.py index 0f9a359cc0..cd9f5f4593 100644 --- a/flopy/version.py +++ b/flopy/version.py @@ -1,3 +1,3 @@ -# flopy version file automatically created using update_version.py on September 30, 2023 14:44:00 +# flopy version file automatically created using update_version.py on November 25, 2023 15:44:45 -__version__ = "3.5.0.dev0" +__version__ = "3.5.0" diff --git a/version.txt b/version.txt index 6a1c5d665e..e5b820341f 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -3.5.0.dev0 +3.5.0 \ No newline at end of file From e906f1b3e764b0fd605ea066b42705f71cc67975 Mon Sep 17 00:00:00 2001 From: wpbonelli Date: Sat, 25 Nov 2023 11:35:41 -0500 Subject: [PATCH 101/101] deduplicate fixes from v3.4.[2,3] in changelog --- .docs/md/version_changes.md | 18 +----------------- 1 file changed, 1 insertion(+), 17 deletions(-) diff --git a/.docs/md/version_changes.md b/.docs/md/version_changes.md index 991872fa7c..9f97911b96 100644 --- a/.docs/md/version_changes.md +++ b/.docs/md/version_changes.md @@ -16,27 +16,10 @@ #### Bug fixes * [fix(exchange and gnc package cellids)](https://github.com/modflowpy/flopy/commit/a84d88596f8b8e8f9f8fa074ad8fce626a84ebd0): #1866 (#1871). 
Committed by spaulins-usgs on 2023-07-11. -* [fix(binaryfile/gridutil)](https://github.com/modflowpy/flopy/commit/90f87f2932e17955dee68ec53347c9d06178f94a): Avoid numpy deprecation warnings (#1868). Committed by w-bonelli on 2023-07-12. -* [fix(binary)](https://github.com/modflowpy/flopy/commit/ed549798333cf2186516d55ffb1e9d9a537a4a0f): Fix binary header information (#1877). Committed by jdhughes-usgs on 2023-07-16. -* [fix(time series)](https://github.com/modflowpy/flopy/commit/8b03916b55fb7a923d13648237147124d77f09f5): Fix for multiple time series attached to single package (#1867) (#1873). Committed by spaulins-usgs on 2023-07-20. -* [fix(check)](https://github.com/modflowpy/flopy/commit/8c330a7890fe732c5489ce6a2d5184a7b2fb70ee): Check now works properly with confined conditions (#1880) (#1882). Committed by spaulins-usgs on 2023-07-27. -* [fix(mtlistfile)](https://github.com/modflowpy/flopy/commit/9a22f639865b9d81716f7fcecdfe33b3e5a352a2): Fix reading MT3D budget (#1899). Committed by Ralf Junghanns on 2023-08-03. -* [fix(check)](https://github.com/modflowpy/flopy/commit/68f0c3bafcd2c806f22b0f75e8d18cc5c4428aa2): Updated flopy's check to work with cellid -1 values (#1885). Committed by spaulins-usgs on 2023-08-06. -* [fix(BaseModel)](https://github.com/modflowpy/flopy/commit/c25c0b34a173fa9788ccd7a0de16da2d02f96e6c): Don't suppress error if exe not found (#1901). Committed by w-bonelli on 2023-08-07. * [fix(modelgrid)](https://github.com/modflowpy/flopy/commit/99f680feb39cb9450e1ce052024bb3da45e264d8): Retain crs data from classic nam files (#1904). Committed by Mike Taves on 2023-08-10. -* [fix(keyword data)](https://github.com/modflowpy/flopy/commit/3cef778bdf979ac249d84d106ae7db01d5d8e391): Optional keywords (#1920). Committed by spaulins-usgs on 2023-08-16. -* [fix(GridIntersect)](https://github.com/modflowpy/flopy/commit/f2064d8ce5d8bca6340c86b4947070031104104a): Combine list of geometries using unary_union (#1923). Committed by Mike Taves on 2023-08-21. -* [fix(gridintersect)](https://github.com/modflowpy/flopy/commit/021ffaa8c406cda742f239673c4d1a802021b8b9): Add multilinestring tests (#1924). Committed by Davíd Brakenhoff on 2023-08-21. -* [fix(binary file)](https://github.com/modflowpy/flopy/commit/66b0903f4812b6cef3020e7cdfd6e2c2d26f6d52): Was writing binary file information twice to external files (#1925) (#1928). Committed by scottrp on 2023-08-25. -* [fix(ParticleData)](https://github.com/modflowpy/flopy/commit/5d57fb78b27238ea69adbdb3438c8e17db55ddad): Fix docstring, structured default is False (#1935). Committed by w-bonelli on 2023-08-25. * [fix(generate_classes)](https://github.com/modflowpy/flopy/commit/4f6cd47411b96460495b1f7d6582860a4d655060): Use branch arg if provided (#1938). Committed by w-bonelli on 2023-08-31. * [fix(remove_model)](https://github.com/modflowpy/flopy/commit/71855bdfcd37e95194b6e5928ae909a6f7ef15fa): Remove_model method fix and tests (#1945). Committed by scottrp on 2023-09-14. -* [fix(export_contours/f)](https://github.com/modflowpy/flopy/commit/9e9d7302cd771ed305c96fbeb406e210c0d9aea7): Support matplotlib 3.8+ (#1951). Committed by wpbonelli on 2023-09-19. -* [fix(resolve_exe)](https://github.com/modflowpy/flopy/commit/d7d7f6649118a84d869522b52b20a4af8a397ebe): Support extensionless abs/rel paths on windows (#1957). Committed by wpbonelli on 2023-09-24. * [fix(model_splitter.py)](https://github.com/modflowpy/flopy/commit/5b5eb4ec20c833df4117415d433513bf3a1b4aa5): Standardizing naming of iuzno, rno, lakeno, & wellno to ifno (#1963). 
Committed by Eric Morway on 2023-09-25. -* [fix(mbase)](https://github.com/modflowpy/flopy/commit/637632cfa3ddbd2629dfc257759d36b851ee0063): Warn if duplicate pkgs or units (#1964). Committed by wpbonelli on 2023-09-26. -* [fix(get_structured_faceflows)](https://github.com/modflowpy/flopy/commit/158d2d69fdfdd2cefbcdaa1a403f9103f51d2d70): Cover edge cases, expand tests (#1968). Committed by wpbonelli on 2023-09-29. -* [fix(CellBudgetFile)](https://github.com/modflowpy/flopy/commit/4b105e866e4c58380415a25d131e9d96fc82a4ba): Detect compact fmt by negative nlay (#1966). Committed by wpbonelli on 2023-09-30. * [fix(pandas list)](https://github.com/modflowpy/flopy/commit/f77989d33955baa76032d9341226385215e45821): Deal with cellids with inconsistent types (#1980). Committed by scottrp on 2023-10-06. * [fix(model_splitter)](https://github.com/modflowpy/flopy/commit/5ebc216822a86f32c7a2ab9a3735f021475dc0f4): Check keys in mftransient array (#1998). Committed by jdhughes-usgs on 2023-11-13. * [fix(benchmarks)](https://github.com/modflowpy/flopy/commit/16183c3059f4a9bb10d74423500b566eb41d3d6e): Fix benchmark post-processing (#2004). Committed by wpbonelli on 2023-11-14. @@ -80,6 +63,7 @@ #### Bug fixes * [fix(export_contours/f)](https://github.com/modflowpy/flopy/commit/30209f2ca2e69289227203e4afd2f33bfceed097): Support matplotlib 3.8+ (#1951). Committed by wpbonelli on 2023-09-19. +* [fix(usg bcf)](https://github.com/modflowpy/flopy/commit/d2dbacb65d2579e42bbd7965c1321a89ccce7d56): ksat util3d call --> util2d call (#1959). Committed by @cnicol-gwlogic on 2023-09-22. * [fix(resolve_exe)](https://github.com/modflowpy/flopy/commit/3522dced8a49dc93fb0140d9ac360a88f31b11bb): Support extensionless abs/rel paths on windows (#1957). Committed by wpbonelli on 2023-09-24. * [fix(mbase)](https://github.com/modflowpy/flopy/commit/b848f968af4179d8618b811cd4fe6f8de66d09cb): Warn if duplicate pkgs or units (#1964). Committed by wpbonelli on 2023-09-26. * [fix(get_structured_faceflows)](https://github.com/modflowpy/flopy/commit/92632d26be2ecb21b6d9d56717faadaa13e08369): Cover edge cases, expand tests (#1968). Committed by wpbonelli on 2023-09-29.