diff --git a/docs/autobuild.sh b/docs/autobuild.sh index 38c02b21a..901ea09a9 100755 --- a/docs/autobuild.sh +++ b/docs/autobuild.sh @@ -8,7 +8,7 @@ REPO="$( realpath "$DOCS/../" )" cd "$REPO/docs" # Build html documentation. -make html +make html || exit 1 # Open documentation in browser. xdg-open "$DOCS/_build/html/index.html" diff --git a/docs/downward.experiment.rst b/docs/downward.experiment.rst index 7b77491d6..d880be032 100644 --- a/docs/downward.experiment.rst +++ b/docs/downward.experiment.rst @@ -1,14 +1,36 @@ +.. _downward-experiment: + :mod:`downward.experiment` --- Fast Downward experiment ======================================================= + +.. note:: + + The :class:`FastDownwardExperiment + <downward.experiment.FastDownwardExperiment>` class makes it easy to write + "standard" experiments with little boilerplate code, but it assumes a rigid + experiment structure: it only allows you to run each added algorithm on each + added task, and individual runs cannot easily be customized. An example for + this is the `2020-09-11-A-cg-vs-ff.py + `_ + experiment. If you need more flexibility, you can use the + :class:`lab.experiment.Experiment` class instead and fill it by using + :class:`FastDownwardAlgorithm <downward.experiment.FastDownwardAlgorithm>`, + :class:`FastDownwardRun <downward.experiment.FastDownwardRun>`, + :class:`CachedFastDownwardRevision + <downward.cached_revision.CachedFastDownwardRevision>`, and :class:`Task + <downward.suites.Task>` objects. The `2020-09-11-B-bounded-cost.py + `_ + script shows an example. All of these classes are documented below. + .. autoclass:: downward.experiment.FastDownwardExperiment :members: add_algorithm, add_suite .. _downward-parsers: -Built-in parsers ---------------- +Bundled parsers +--------------- -The following constants are paths to built-in parsers that can be passed +The following constants are paths to default parsers that can be passed to :meth:`exp.add_parser() <lab.experiment.Experiment.add_parser>`. The "Used attributes" and "Parsed attributes" lists describe the dependencies between the parsers. @@ -27,3 +49,26 @@ dependencies between the parsers. .. 
autoattribute:: downward.experiment.FastDownwardExperiment.PLANNER_PARSER :annotation: + +.. autoclass:: downward.experiment.FastDownwardAlgorithm + :members: + :undoc-members: + :inherited-members: + +.. autoclass:: downward.experiment.FastDownwardRun + :members: + :undoc-members: + +.. autoclass:: downward.cached_revision.CachedFastDownwardRevision + :members: + :undoc-members: + :inherited-members: + +:mod:`downward.suites` --- Select benchmarks +-------------------------------------------- + +.. autoclass:: downward.suites.Task + :members: + :undoc-members: + +.. autofunction:: downward.suites.build_suite diff --git a/docs/downward.tutorial.rst b/docs/downward.tutorial.rst index 4e1879ed8..23783faa8 100644 --- a/docs/downward.tutorial.rst +++ b/docs/downward.tutorial.rst @@ -121,13 +121,12 @@ directories containing a ``.venv`` subdirectory. Run tutorial experiment ----------------------- -The files below are an experiment script for the example experiment, a -``project.py`` module that bundles common functionality for all -experiments related to the project, a parser script, and a script for -collecting results and making reports. You can use the files as a basis -for your own experiments. They are available in the `Lab repo -`_. Copy the -files into ``experiments/cg-vs-ff``. +The files below are two experiment scripts, a ``project.py`` module that bundles +common functionality for all experiments related to the project, a parser +script, and a script for collecting results and making reports. You can use the +files as a basis for your own experiments. They are available in the `Lab repo +`_. Copy the files +into ``experiments/my-exp-dir``. .. 
highlight:: bash @@ -147,11 +146,28 @@ Run individual steps with :: ./2020-09-11-A-cg-vs-ff.py 2 ./2020-09-11-A-cg-vs-ff.py 3 6 7 +The ``2020-09-11-A-cg-vs-ff.py`` script uses the +:class:`downward.experiment.FastDownwardExperiment` class, which reduces the +amount of code you need to write, but assumes a rigid structure of the +experiment: it only allows you to run each added algorithm on each added task, +and individual runs cannot be customized. If you need more flexibility, you can +employ the :class:`lab.experiment.Experiment` class instead and fill it by using +:class:`FastDownwardAlgorithm <downward.experiment.FastDownwardAlgorithm>`, +:class:`FastDownwardRun <downward.experiment.FastDownwardRun>`, +:class:`CachedFastDownwardRevision +<downward.cached_revision.CachedFastDownwardRevision>`, and :class:`Task +<downward.suites.Task>` objects. The ``2020-09-11-B-bounded-cost.py`` script shows +an example. See the `Downward Lab API `_ for a +reference on all Downward Lab classes. + .. highlight:: python .. literalinclude:: ../examples/downward/2020-09-11-A-cg-vs-ff.py :caption: +.. literalinclude:: ../examples/downward/2020-09-11-B-bounded-cost.py + :caption: + .. literalinclude:: ../examples/downward/project.py :caption: @@ -163,5 +179,5 @@ Run individual steps with :: The `Downward Lab API `_ shows you how to adjust this example to your needs. You may also find the `other example -experiments `_ +experiments `_ useful. diff --git a/docs/faq.rst b/docs/faq.rst index 4edef86b6..2051b7814 100644 --- a/docs/faq.rst +++ b/docs/faq.rst @@ -64,7 +64,7 @@ How can I make reports and plots for results obtained without Lab? ------------------------------------------------------------------ See `report-external-results.py -`_ +`_ for an example. @@ -82,7 +82,7 @@ How can I contribute to Lab? If you'd like to contribute a feature or a bugfix to Lab or Downward Lab, please see `CONTRIBUTING.md -`_. +`_. How can I customize Lab? 
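The difference between the rigid and the flexible experiment structure described in the tutorial above can be sketched in plain Python. This is an illustration of the structure only, not the Lab API; the algorithm names, task names, and bound values are hypothetical:

```python
from itertools import product

# Rigid structure (FastDownwardExperiment): every added algorithm is paired
# with every added task, and single runs cannot be customized.
algorithms = ["01-cg", "02-ff"]  # hypothetical algorithm names
tasks = ["gripper:prob01.pddl", "grid:prob01.pddl"]  # hypothetical tasks
rigid_runs = [{"algo": a, "task": t} for a, t in product(algorithms, tasks)]

# Flexible structure (plain Experiment filled with individual runs): runs are
# created one by one, so each run can carry per-run settings, e.g., a
# task-specific cost bound as in 2020-09-11-B-bounded-cost.py.
bounds = {"gripper:prob01.pddl": 11, "grid:prob01.pddl": 14}  # hypothetical bounds
flexible_runs = [
    {"algo": a, "task": t, "bound": bounds[t]} for a, t in product(algorithms, tasks)
]
```

In both cases the same algorithm/task pairs exist, but only the second form lets you attach per-run data.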
diff --git a/docs/lab.experiment.rst b/docs/lab.experiment.rst index 126f608ba..d3bc1250d 100644 --- a/docs/lab.experiment.rst +++ b/docs/lab.experiment.rst @@ -57,6 +57,15 @@ Custom command line arguments :exclude-members: build +:class:`CachedRevision` +----------------------- + +.. autoclass:: lab.cached_revision.CachedRevision + :members: + :undoc-members: + :inherited-members: + + .. _parsing: :class:`Parser` @@ -73,6 +82,11 @@ Custom command line arguments -------------------- .. autoclass:: lab.environments.Environment + +Lab supports several built-in environments below. Additionally, support for +`HTCondor `_ clusters is provided by a `third-party +repository `_. + .. autoclass:: lab.environments.LocalEnvironment .. autoclass:: lab.environments.SlurmEnvironment .. autoclass:: lab.environments.BaselSlurmEnvironment diff --git a/docs/news.rst b/docs/news.rst index 03d57e833..56ed6a5be 100644 --- a/docs/news.rst +++ b/docs/news.rst @@ -1,6 +1,37 @@ Changelog ========= +next (unreleased) +----------------- + +Lab +^^^ +* Provide support for HTCondor clusters in external repo and add link to docs (Martín Pozo). + + +Downward Lab +^^^^^^^^^^^^ +* no changes so far + + +v7.4 (2023-08-18) +----------------- + +Lab +^^^ +* Require *revision_cache* parameter in :class:`CachedRevision ` constructor (Jendrik Seipp). +* Add *subdir* option for :class:`CachedRevision ` to support solvers at deeper levels of a repo (Jendrik Seipp). +* Add :meth:`CachedRevision.get_relative_exp_path() ` method to query where cache artefacts will land in the experiment directory (Jendrik Seipp). +* Document :class:`CachedRevision ` class and stabilize its API (Jendrik Seipp). +* Only use documented classes and functions in example experiments (Jendrik Seipp). + +Downward Lab +^^^^^^^^^^^^ +* Add *subdir* option for :class:`CachedFastDownwardRevision ` to support Fast Downward checkouts at deeper levels of a repo (Jendrik Seipp). 
+* Make :class:`FastDownwardAlgorithm `, :class:`FastDownwardRun ` and :class:`CachedFastDownwardRevision ` classes part of the documented, stable API (Jendrik Seipp). +* Describe :ref:`two main alternatives ` for running Fast Downward experiments (Jendrik Seipp). + + v7.3 (2023-03-03) ----------------- @@ -624,9 +655,9 @@ Downward Lab (``build_successful``) is written in the cache directory. When the cached revision is requested later an error is shown if this file is missing. -* Only use exit codes to reason about error reasons. Merge from FD master if your FD version does not produce meaningful exit codes. +* Only use exit codes to reason about error reasons. Merge from FD main if your FD version does not produce meaningful exit codes. * Preprocess parser: only parse logs and never output files. -* Never copy ``all.groups`` and ``test.groups`` files. Old Fast Downward branches need to merge from master. +* Never copy ``all.groups`` and ``test.groups`` files. Old Fast Downward branches need to merge from main. * Always compress ``output.sas`` (also for ``compact=False``). Use ``xz`` for compressing. diff --git a/downward/cached_revision.py b/downward/cached_revision.py index 2de957dcb..9e40f0591 100644 --- a/downward/cached_revision.py +++ b/downward/cached_revision.py @@ -1,5 +1,3 @@ -import glob -import os.path import subprocess from lab import tools @@ -9,38 +7,48 @@ class CachedFastDownwardRevision(CachedRevision): """This class represents Fast Downward checkouts. - It provides methods for caching and compiling given revisions. + It provides methods for caching compiled revisions, so they can be reused + quickly in different experiments. + """ - def __init__(self, repo, rev, build_options): + def __init__(self, revision_cache, repo, rev, build_options, subdir=""): """ + * *revision_cache*: Path to revision cache. * *repo*: Path to Fast Downward repository. * *rev*: Fast Downward revision. * *build_options*: List of build.py options. 
+ * *subdir*: relative path from *repo* to Fast Downward subdir. """ CachedRevision.__init__( - self, repo, rev, ["./build.py"] + build_options, ["experiments", "misc"] + self, + revision_cache, + repo, + rev, + ["./build.py"] + build_options, + exclude=["experiments", "misc"], + subdir=subdir, ) + # Store for easy retrieval by class users. self.build_options = build_options def _cleanup(self): # Only keep the bin directories in "builds" dir. - for path in glob.glob(os.path.join(self.path, "builds", "*", "*")): - if os.path.basename(path) != "bin": + for path in self.path.glob("builds/*/*"): + if path.name != "bin": tools.remove_path(path) # Remove unneeded files. - tools.remove_path(os.path.join(self.path, "build.py")) + tools.remove_path(self.path / "build.py") # Strip binaries. binaries = [] - for path in glob.glob(os.path.join(self.path, "builds", "*", "bin", "*")): - if os.path.basename(path) in ["downward", "preprocess"]: + for path in self.path.glob("builds/*/bin/*"): + if path.name in ["downward", "preprocess"]: binaries.append(path) - subprocess.call(["strip"] + binaries) + subprocess.check_call(["strip"] + binaries) # Compress src directory. 
- subprocess.call( - ["tar", "-cf", "src.tar", "--remove-files", "src"], cwd=self.path + subprocess.check_call( + ["tar", "--remove-files", "--xz", "-cf", "src.tar.xz", "src"], cwd=self.path ) - subprocess.call(["xz", "src.tar"], cwd=self.path) diff --git a/downward/experiment.py b/downward/experiment.py index 9d7e75918..006fab458 100644 --- a/downward/experiment.py +++ b/downward/experiment.py @@ -5,6 +5,7 @@ from collections import defaultdict, OrderedDict import logging import os.path +from pathlib import Path from downward import suites from downward.cached_revision import CachedFastDownwardRevision @@ -16,74 +17,94 @@ DOWNWARD_SCRIPTS_DIR = os.path.join(DIR, "scripts") -def _get_solver_resource_name(cached_rev): - return "fast_downward_" + cached_rev.name +class FastDownwardAlgorithm: + """ + A Fast Downward algorithm is the combination of revision, driver options and + component options. + """ + + def __init__( + self, + name: str, + cached_revision: CachedFastDownwardRevision, + driver_options, + component_options, + ): + #: Algorithm name, e.g., ``"rev123:astar-lmcut"``. + self.name = name + #: An instance of :class:`CachedFastDownwardRevision + #: `. + self.cached_revision = cached_revision + #: Driver options, e.g., ``["--build", "debug"]``. + self.driver_options = driver_options + #: Component options, e.g., ``["--search", "astar(lmcut())"]``. + self.component_options = component_options + + def __eq__(self, other): + """Return true iff all components (excluding the name) match.""" + return ( + self.cached_revision == other.cached_revision + and self.driver_options == other.driver_options + and self.component_options == other.component_options + ) + + +# Create alias for backwards compatibility. +_DownwardAlgorithm = FastDownwardAlgorithm class FastDownwardRun(Run): - def __init__(self, exp, algo, task): - Run.__init__(self, exp) - self.algo = algo - self.task = task + """An experiment run that uses *algo* to solve *task*. 
- self.driver_options = algo.driver_options[:] + See :py:class:`Run ` for inherited methods. + + """ - if self.task.domain_file is None: - self.add_resource("task", self.task.problem_file, "task.sas", symlink=True) + def __init__(self, exp: Experiment, algo: FastDownwardAlgorithm, task: suites.Task): + super().__init__(exp) + driver_options = algo.driver_options[:] + + if task.domain_file is None: + self.add_resource("task", task.problem_file, "task.sas", symlink=True) input_files = ["{task}"] # Without PDDL input files, we can't validate the solution. - self.driver_options.remove("--validate") + driver_options = [opt for opt in driver_options if opt != "--validate"] else: + self.add_resource("domain", task.domain_file, "domain.pddl", symlink=True) self.add_resource( - "domain", self.task.domain_file, "domain.pddl", symlink=True - ) - self.add_resource( - "problem", self.task.problem_file, "problem.pddl", symlink=True + "problem", task.problem_file, "problem.pddl", symlink=True ) input_files = ["{domain}", "{problem}"] + driver = os.path.join( + exp.path, + algo.cached_revision.get_relative_exp_path("fast-downward.py"), + ) self.add_command( "planner", [tools.get_python_executable()] - + ["{" + _get_solver_resource_name(algo.cached_revision) + "}"] - + self.driver_options + + [driver] + + driver_options + input_files + algo.component_options, ) - self._set_properties() + self._set_properties(algo, driver_options, task) - def _set_properties(self): - self.set_property("algorithm", self.algo.name) - self.set_property("repo", self.algo.cached_revision.repo) - self.set_property("local_revision", self.algo.cached_revision.local_rev) - self.set_property("global_revision", self.algo.cached_revision.global_rev) - self.set_property("build_options", self.algo.cached_revision.build_options) - self.set_property("driver_options", self.driver_options) - self.set_property("component_options", self.algo.component_options) + def _set_properties(self, algo, driver_options, task): + 
self.set_property("algorithm", algo.name) + self.set_property("repo", algo.cached_revision.repo) + self.set_property("local_revision", algo.cached_revision.local_rev) + self.set_property("global_revision", algo.cached_revision.global_rev) + self.set_property("build_options", algo.cached_revision.build_options) + self.set_property("driver_options", driver_options) + self.set_property("component_options", algo.component_options) - for key, value in self.task.properties.items(): + for key, value in task.properties.items(): self.set_property(key, value) self.set_property("experiment_name", self.experiment.name) - - self.set_property("id", [self.algo.name, self.task.domain, self.task.problem]) - - -class _DownwardAlgorithm: - def __init__(self, name, cached_revision, driver_options, component_options): - self.name = name - self.cached_revision = cached_revision - self.driver_options = driver_options - self.component_options = component_options - - def __eq__(self, other): - """Return true iff all components (excluding the name) match.""" - return ( - self.cached_revision == other.cached_revision - and self.driver_options == other.driver_options - and self.component_options == other.component_options - ) + self.set_property("id", [algo.name, task.domain, task.problem]) class FastDownwardExperiment(Experiment): @@ -204,8 +225,8 @@ def add_suite(self, benchmarks_dir, suite): """ if isinstance(suite, str): suite = [suite] - benchmarks_dir = os.path.abspath(benchmarks_dir) - if not os.path.exists(benchmarks_dir): + benchmarks_dir = Path(benchmarks_dir).resolve() + if not benchmarks_dir.is_dir(): logging.critical(f"Benchmarks directory {benchmarks_dir} not found.") self._suites[benchmarks_dir].extend(suite) @@ -300,9 +321,9 @@ def add_algorithm( "--overall-memory-limit", "3584M", ] + (driver_options or []) - algorithm = _DownwardAlgorithm( + algorithm = FastDownwardAlgorithm( name, - CachedFastDownwardRevision(repo, rev, build_options), + 
CachedFastDownwardRevision(self.revision_cache, repo, rev, build_options), driver_options, component_options, ) @@ -322,12 +343,12 @@ def build(self, **kwargs): if not self._algorithms: logging.critical("You must add at least one algorithm.") - # We convert the problems in suites to strings to avoid errors when converting - # properties to JSON later. The clean but more complex solution would be to add - # a method to the JSONEncoder that recognizes and correctly serializes the class - # Problem. + # We convert the problems in suites to strings to avoid errors when + # converting properties to JSON later. The clean but more complex + # solution would be to add a method to the JSONEncoder that recognizes + # and correctly serializes the Path and Task classes. serialized_suites = { - benchmarks_dir: [str(problem) for problem in benchmarks] + str(benchmarks_dir): [str(problem) for problem in benchmarks] for benchmarks_dir, benchmarks in self._suites.items() } self.set_property("suite", serialized_suites) @@ -347,20 +368,13 @@ def _get_unique_cached_revisions(self): def _cache_revisions(self): for cached_rev in self._get_unique_cached_revisions(): - cached_rev.cache(self.revision_cache) + cached_rev.cache() def _add_code(self): """Add the compiled code to the experiment.""" for cached_rev in self._get_unique_cached_revisions(): - cache_path = os.path.join(self.revision_cache, cached_rev.name) - dest_path = "code-" + cached_rev.name - self.add_resource("", cache_path, dest_path) - # Overwrite the script to set an environment variable. 
- self.add_resource( - _get_solver_resource_name(cached_rev), - os.path.join(cache_path, "fast-downward.py"), - os.path.join(dest_path, "fast-downward.py"), - ) + dest_path = cached_rev.get_relative_exp_path() + self.add_resource("", cached_rev.path, dest_path) def _add_runs(self): tasks = self._get_tasks() diff --git a/downward/suites.py b/downward/suites.py index 39156726c..12ab08a6f 100644 --- a/downward/suites.py +++ b/downward/suites.py @@ -1,9 +1,9 @@ -import os +from pathlib import Path from lab import tools -def find_domain_file(benchmarks_dir, domain, problem): +def find_domain_file(benchmarks_dir, domain: str, problem: str): """Search for domain file in the directory *benchmarks_dir*/*domain*. For a given problem filename ".", check the following @@ -14,7 +14,8 @@ def find_domain_file(benchmarks_dir, domain, problem): and psr-small domains, where problem file names are p01-xxx.pddl and domain file names are p01-domain.pddl. """ - problem_root, ext = os.path.splitext(problem) + problem_root = Path(problem).stem + ext = Path(problem).suffix domain_basenames = [ "domain.pddl", problem_root + "-domain" + ext, @@ -22,33 +23,31 @@ def find_domain_file(benchmarks_dir, domain, problem): "domain_" + problem, "domain-" + problem, ] - domain_dir = os.path.join(benchmarks_dir, domain) + domain_dir = Path(benchmarks_dir) / domain return tools.find_file(domain_basenames, domain_dir) -def get_problem(benchmarks_dir, domain_name, problem_name): - problem_file = os.path.join(benchmarks_dir, domain_name, problem_name) +def get_task(benchmarks_dir, domain: str, problem: str): + problem_file = Path(benchmarks_dir) / domain / problem domain_file = None - if problem_file.endswith(".pddl"): - domain_file = find_domain_file(benchmarks_dir, domain_name, problem_name) - return Problem( - domain_name, problem_name, problem_file=problem_file, domain_file=domain_file - ) + if problem_file.suffix == ".pddl": + domain_file = find_domain_file(benchmarks_dir, domain, problem) + return 
Task(domain, problem, problem_file=problem_file, domain_file=domain_file) class Domain: - def __init__(self, benchmarks_dir, domain): + def __init__(self, benchmarks_dir, domain: str): self.domain = domain - directory = os.path.join(benchmarks_dir, domain) + directory = Path(benchmarks_dir) / domain problem_files = tools.natural_sort( [ - p - for p in os.listdir(directory) - if "domain" not in p and p.endswith((".pddl", ".sas")) + p.name + for p in directory.iterdir() + if "domain" not in p.name and p.suffix in {".pddl", ".sas"} ] ) self.problems = [ - get_problem(benchmarks_dir, domain, problem) for problem in problem_files + get_task(benchmarks_dir, domain, problem) for problem in problem_files ] def __str__(self): @@ -67,9 +66,9 @@ def __iter__(self): return iter(self.problems) -class Problem: +class Task: def __init__( - self, domain, problem, problem_file, domain_file=None, properties=None + self, domain: str, problem: str, problem_file, domain_file=None, properties=None ): """ *domain* and *problem* are the display names of the domain and @@ -81,17 +80,13 @@ def __init__( added to the properties file of each run that uses this problem. :: - suite = [ - Problem('gripper-original', 'prob01.pddl', - problem_file='/path/to/original/problem.pddl', - domain_file='/path/to/original/domain.pddl', - properties={'relaxed': False}), - Problem('gripper-relaxed', 'prob01.pddl', - problem_file='/path/to/relaxed/problem.pddl', - domain_file='/path/to/relaxed/domain.pddl', - properties={'relaxed': True}), - Problem('gripper', 'prob01.pddl', '/path/to/prob01.pddl') - ] + >>> task = Task( + ... "gripper", + ... "p01.pddl", + ... problem_file="/path/to/prob01.pddl", + ... domain_file="/path/to/domain.pddl", + ... properties={"relaxed": False}, + ... 
) """ self.domain = domain self.problem = problem @@ -104,8 +99,8 @@ def __init__( def __str__(self): return ( - f"" + f"" ) @@ -114,19 +109,22 @@ def _generate_problems(benchmarks_dir, description): Descriptions are either domains (e.g., "gripper") or problems (e.g., "gripper:prob01.pddl"). """ - if isinstance(description, Problem): + if isinstance(description, Task): yield description elif isinstance(description, Domain): yield from description elif ":" in description: domain_name, problem_name = description.split(":", 1) - yield get_problem(benchmarks_dir, domain_name, problem_name) + yield get_task(benchmarks_dir, domain_name, problem_name) else: yield from Domain(benchmarks_dir, description) def build_suite(benchmarks_dir, descriptions): - """ + """Compute a list of :class:`Task ` objects. + + The path *benchmarks_dir* must contain a subdir for each domain. + *descriptions* must be a list of domain or problem descriptions:: build_suite(benchmarks_dir, ["gripper", "grid:prob01.pddl"]) diff --git a/examples/downward/01-evaluation.py b/examples/downward/01-evaluation.py index 0683a5829..b259d9d77 100755 --- a/examples/downward/01-evaluation.py +++ b/examples/downward/01-evaluation.py @@ -24,7 +24,7 @@ "remove-combined-properties", project.remove_file, Path(exp.eval_dir) / "properties" ) -project.fetch_algorithm(exp, "2020-09-11-A-cg-vs-ff", "20.06:01-cg", new_algo="cg") +project.fetch_algorithm(exp, "2020-09-11-A-cg-vs-ff", "01-cg", new_algo="cg") project.fetch_algorithms(exp, "2020-09-11-B-bounded-cost") filters = [project.add_evaluations_per_time] diff --git a/examples/downward/2020-09-11-A-cg-vs-ff.py b/examples/downward/2020-09-11-A-cg-vs-ff.py index 0fd007bac..929f990fd 100755 --- a/examples/downward/2020-09-11-A-cg-vs-ff.py +++ b/examples/downward/2020-09-11-A-cg-vs-ff.py @@ -10,7 +10,7 @@ BENCHMARKS_DIR = os.environ["DOWNWARD_BENCHMARKS"] SCP_LOGIN = "myname@myserver.com" REMOTE_REPOS_DIR = "/infai/seipp/projects" -# If REVISION_CACHE is None, the default 
./data/revision-cache is used. +# If REVISION_CACHE is None, the default "./data/revision-cache/" is used. REVISION_CACHE = os.environ.get("DOWNWARD_REVISION_CACHE") if project.REMOTE: SUITE = project.SUITE_SATISFICING @@ -31,8 +31,8 @@ ] BUILD_OPTIONS = [] DRIVER_OPTIONS = ["--overall-time-limit", "5m"] -REVS = [ - ("main", "main"), +REV_NICKS = [ + ("main", ""), ] ATTRIBUTES = [ "error", @@ -49,7 +49,7 @@ exp = project.FastDownwardExperiment(environment=ENV, revision_cache=REVISION_CACHE) for config_nick, config in CONFIGS: - for rev, rev_nick in REVS: + for rev, rev_nick in REV_NICKS: algo_name = f"{rev_nick}:{config_nick}" if rev_nick else config_nick exp.add_algorithm( algo_name, @@ -81,7 +81,7 @@ attributes = ["expansions"] pairs = [ - ("20.06:01-cg", "20.06:02-ff"), + ("01-cg", "02-ff"), ] suffix = "-rel" if project.RELATIVE else "" for algo1, algo2 in pairs: diff --git a/examples/downward/2020-09-11-B-bounded-cost.py b/examples/downward/2020-09-11-B-bounded-cost.py index 260c3ec6f..cdc26adf2 100755 --- a/examples/downward/2020-09-11-B-bounded-cost.py +++ b/examples/downward/2020-09-11-B-bounded-cost.py @@ -2,17 +2,12 @@ import json import os -from pathlib import Path import shutil from downward import suites from downward.cached_revision import CachedFastDownwardRevision -from downward.experiment import ( - _DownwardAlgorithm, - _get_solver_resource_name, - FastDownwardRun, -) -from lab.experiment import Experiment, get_default_data_dir +from downward.experiment import FastDownwardAlgorithm, FastDownwardRun +from lab.experiment import Experiment import project @@ -23,10 +18,9 @@ REMOTE_REPOS_DIR = "/infai/seipp/projects" BOUNDS_FILE = "bounds.json" SUITE = ["depot:p01.pddl", "grid:prob01.pddl", "gripper:prob01.pddl"] -try: - REVISION_CACHE = Path(os.environ["DOWNWARD_REVISION_CACHE"]) -except KeyError: - REVISION_CACHE = Path(get_default_data_dir()) / "revision-cache" +REVISION_CACHE = ( + os.environ.get("DOWNWARD_REVISION_CACHE") or project.DIR / "data" / 
"revision-cache" +) if project.REMOTE: # ENV = project.BaselSlurmEnvironment(email="my.name@myhost.ch") ENV = project.TetralithEnvironment( @@ -47,9 +41,9 @@ "--overall-memory-limit", "3584M", ] -# Pairs of revision identifier and revision nick. -REVS = [ - ("main", "main"), +# Pairs of revision identifier and optional revision nick. +REV_NICKS = [ + ("main", ""), ] ATTRIBUTES = [ "error", @@ -65,18 +59,10 @@ ] exp = Experiment(environment=ENV) -for rev, rev_nick in REVS: - cached_rev = CachedFastDownwardRevision(REPO, rev, BUILD_OPTIONS) - cached_rev.cache(REVISION_CACHE) - cache_path = REVISION_CACHE / cached_rev.name - dest_path = Path(f"code-{cached_rev.name}") - exp.add_resource("", cache_path, dest_path) - # Overwrite the script to set an environment variable. - exp.add_resource( - _get_solver_resource_name(cached_rev), - cache_path / "fast-downward.py", - dest_path / "fast-downward.py", - ) +for rev, rev_nick in REV_NICKS: + cached_rev = CachedFastDownwardRevision(REVISION_CACHE, REPO, rev, BUILD_OPTIONS) + cached_rev.cache() + exp.add_resource("", cached_rev.path, cached_rev.get_relative_exp_path()) for config_nick, config in CONFIGS: algo_name = f"{rev_nick}-{config_nick}" if rev_nick else config_nick @@ -91,7 +77,7 @@ config_with_bound[-1] = config_with_bound[-1].replace( "bound=BOUND", f"bound={upper_bound}" ) - algo = _DownwardAlgorithm( + algo = FastDownwardAlgorithm( algo_name, cached_rev, DRIVER_OPTIONS, diff --git a/examples/downward/project.py b/examples/downward/project.py index bfb366344..b38c9ea49 100644 --- a/examples/downward/project.py +++ b/examples/downward/project.py @@ -8,7 +8,6 @@ from downward.reports.absolute import AbsoluteReport from downward.reports.scatter import ScatterPlotReport from downward.reports.taskwise import TaskwiseReport -from lab import tools from lab.environments import ( BaselSlurmEnvironment, LocalEnvironment, @@ -30,6 +29,8 @@ DIR = Path(__file__).resolve().parent +SCRIPT = Path(sys.argv[0]).resolve() + NODE = 
platform.node() # Cover both the Basel and Linköping clusters for simplicity. REMOTE = NODE.endswith((".scicore.unibas.ch", ".cluster.bc2.ch")) or re.match( @@ -231,7 +232,7 @@ def get_repo_base() -> Path: directory with a subdirectory named ".git" is found. Abort if the repo base cannot be found.""" - path = Path(tools.get_script_path()) + path = Path(SCRIPT) while path.parent != path: if (path / ".git").is_dir(): return path @@ -256,7 +257,7 @@ def add_evaluations_per_time(run): def _get_exp_dir_relative_to_repo(): repo_name = get_repo_base().name - script = Path(tools.get_script_path()) + script = Path(SCRIPT) script_dir = script.parent rel_script_dir = script_dir.relative_to(get_repo_base()) expname = script.stem diff --git a/examples/lmcut.py b/examples/lmcut.py index bc65d0917..32b240ec8 100755 --- a/examples/lmcut.py +++ b/examples/lmcut.py @@ -1,6 +1,10 @@ #! /usr/bin/env python -"""Solve some tasks with A* and the LM-Cut heuristic.""" +"""Basic experiment that solves some tasks with A* and the LM-Cut heuristic. + +For more realistic Fast Downward experiments, see the ``examples/downward`` +directory. +""" import os import os.path @@ -25,7 +29,7 @@ # Use path to your Fast Downward repository. REPO = os.environ["DOWNWARD_REPO"] BENCHMARKS_DIR = os.environ["DOWNWARD_BENCHMARKS"] -# If REVISION_CACHE is None, the default ./data/revision-cache is used. +# If REVISION_CACHE is None, the default "./data/revision-cache/" is used. 
REVISION_CACHE = os.environ.get("DOWNWARD_REVISION_CACHE") REV = "main" diff --git a/examples/report-external-results.py b/examples/report-external-results.py index 109e366dc..a6b08b3be 100755 --- a/examples/report-external-results.py +++ b/examples/report-external-results.py @@ -15,10 +15,9 @@ """ import json -import os.path +from pathlib import Path from downward.reports.absolute import AbsoluteReport -from lab import tools from lab.experiment import Experiment @@ -41,8 +40,9 @@ def write_properties(eval_dir): - tools.makedirs(eval_dir) - with open(os.path.join(eval_dir, "properties"), "w") as f: + eval_dir = Path(eval_dir) + eval_dir.mkdir(parents=True, exist_ok=True) + with open(eval_dir / "properties", "w") as f: json.dump(PROPERTIES, f) diff --git a/examples/singularity/singularity-exp.py b/examples/singularity/singularity-exp.py index 118daf916..27ce5e7e6 100755 --- a/examples/singularity/singularity-exp.py +++ b/examples/singularity/singularity-exp.py @@ -1,21 +1,28 @@ #! /usr/bin/env python """ -Example experiment for running Singularity planner images. - -The time and memory limits set with Lab can be circumvented by solvers -that fork child processes. Their resource usage is not checked. If you're -running solvers that don't check their resource usage like Fast Downward, -we recommend using cgroups or the "runsolver" tool to enforce resource -limits. Since setting time limits for solvers with cgroups is difficult, -the experiment below uses the "runsolver" tool, which has been used in -multiple SAT competitions to enforce resource limits. For the experiment -to run, the runsolver binary needs to be on the PATH. You can obtain a -runsolver copy from https://github.com/jendrikseipp/runsolver. - -A note on running Singularity on clusters: reading large Singularity files -over the network is not optimal, so we recommend copying the images to a -local filesystem (e.g., /tmp/) before running experiments. 
+Example experiment for running Singularity/Apptainer planner images. + +The time and memory limits set with Lab can be circumvented by solvers that fork +child processes. Their resource usage is not checked. If you're running solvers +that don't check their resource usage like Fast Downward, we recommend using +cgroups or the "runsolver" tool to enforce resource limits. Since setting time +limits for solvers with cgroups is difficult, the experiment below uses the +``runsolver`` tool, which has been used in multiple SAT competitions to enforce +resource limits. For the experiment to run, the runsolver binary needs to be on +the PATH. You can obtain a runsolver copy from +https://github.com/jendrikseipp/runsolver. + +Since Singularity (and Apptainer) reserve 1-2 GiB of *virtual* memory when +starting the container, we recommend either enforcing a higher virtual memory +limit with ``runsolver`` or limiting RSS memory with ``runsolver`` (like below). +For limiting RSS memory, you can also use `runlim +`_, which is more actively maintained than +runsolver. + +A note on running Singularity on clusters: reading large Singularity files over +the network is not optimal, so we recommend copying the images to a local +filesystem (e.g., /tmp/) before running experiments. """ import os @@ -113,13 +120,13 @@ def get_image(name): "run-planner", [ "runsolver", - "-C", + "--cpu-limit", TIME_LIMIT, - "-V", + "--rss-swap-limit", MEMORY_LIMIT, - "-w", + "--watcher-data", "watch.log", - "-v", + "--var", "values.log", "{run_singularity}", f"{{{planner}}}", diff --git a/lab/__init__.py b/lab/__init__.py index 1f451fafe..b2598df12 100644 --- a/lab/__init__.py +++ b/lab/__init__.py @@ -1,2 +1,2 @@ #: Lab version number. A "+" is appended to all non-tagged revisions. 
-__version__ = "7.3+" +__version__ = "7.4+" diff --git a/lab/cached_revision.py b/lab/cached_revision.py index 8c5992f79..f13bac353 100644 --- a/lab/cached_revision.py +++ b/lab/cached_revision.py @@ -2,6 +2,7 @@ import hashlib import logging import os.path +from pathlib import Path import shutil import subprocess import tarfile @@ -45,25 +46,20 @@ def _compute_md5_hash(mylist): class CachedRevision: - """This class represents checkouts of a solver. + """Cache compiled revisions of a solver for quick reuse.""" - It provides methods for compiling and caching given revisions. - - .. warning:: - - The API for this class is experimental and subject to change. - Feedback is welcome! - """ - - def __init__(self, repo, rev, build_cmd, exclude=None): + def __init__(self, revision_cache, repo, rev, build_cmd, exclude=None, subdir=""): """ + * *revision_cache*: path to revision cache directory. * *repo*: path to solver repository. * *rev*: solver revision. * *build_cmd*: list with build script and any build options - (e.g., ``["./build.py", "release"]``, ``["make"]``). - * *exclude*: list of paths in repo that are not needed for building - and running the solver. Instead of this parameter, you can also + (e.g., ``["./build.py", "release"]``, ``["make"]``). Will be executed under + *subdir*. + * *exclude*: list of relative paths under *subdir* that are not needed for + building and running the solver. Instead of this parameter, you can also use a ``.gitattributes`` file for Git repositories. + * *subdir*: relative path from *repo* to solver subdir. The following example caches a Fast Downward revision. When you use the :class:`FastDownwardExperiment @@ -76,28 +72,33 @@ def __init__(self, repo, rev, build_cmd, exclude=None): >>> revision_cache = os.environ.get("DOWNWARD_REVISION_CACHE") >>> if revision_cache: ... rev = "main" - ... cr = CachedRevision(repo, rev, ["./build.py"], exclude=["experiments"]) - ... # cr.cache(revision_cache) # Uncomment to actually cache the code. 
+ ... cr = CachedRevision( + ... revision_cache, repo, rev, ["./build.py"], exclude=["experiments"] + ... ) + ... # cr.cache() # Uncomment to actually cache the code. ... - You can now copy the cached repo to your experiment: + You can now copy the cached repo to your experiment:: ... from lab.experiment import Experiment ... exp = Experiment() - ... cache_path = os.path.join(revision_cache, cr.name) - ... dest_path = os.path.join(exp.path, "code-" + cr.name) - ... exp.add_resource("solver_" + cr.name, cache_path, dest_path) + ... dest_path = os.path.join(exp.path, f"code-{cr.name}") + ... exp.add_resource(f"solver_{cr.name}", cr.path, dest_path) """ if not os.path.isdir(repo): logging.critical(f"{repo} is not a local solver repository.") + self.revision_cache = Path(revision_cache) self.repo = repo + self.subdir = subdir self.build_cmd = build_cmd self.local_rev = rev self.global_rev = get_global_rev(repo, rev) - self.path = None self.exclude = exclude or [] self.name = self._compute_hashed_name() + self.path = self.revision_cache / self.name + # Temporary directory for preparing the checkout. + self._tmp_path = self.revision_cache / f"{self.name}-tmp" def __eq__(self, other): return self.name == other.name @@ -106,10 +107,11 @@ def __hash__(self): return hash(self.name) def _compute_hashed_name(self): - return f"{self.global_rev}_{_compute_md5_hash(self.build_cmd + self.exclude)}" + options_hash = _compute_md5_hash(self.build_cmd + self.exclude + [self.subdir]) + return f"{self.global_rev}_{options_hash}" - def cache(self, revision_cache): - self.path = os.path.join(revision_cache, self.name) + def cache(self): + """Check out the solver revision to *self.path* and compile the solver.""" if os.path.exists(self.path): logging.info(f'Revision is already cached: "{self.path}"') if not os.path.exists(self._get_sentinel_file()): @@ -118,37 +120,52 @@ def cache(self, revision_cache): f"Please delete it and try again." 
) else: - tools.makedirs(self.path) - tar_archive = os.path.join(self.path, "solver.tgz") + tools.remove_path(self._tmp_path) + tools.makedirs(self._tmp_path) + tar_archive = os.path.join(self._tmp_path, "solver.tgz") cmd = ["git", "archive", "--format", "tar", self.global_rev] with open(tar_archive, "w") as f: retcode = tools.run_command(cmd, stdout=f, cwd=self.repo) - if retcode == 0: - with tarfile.open(tar_archive) as tf: - tf.extractall(self.path) - tools.remove_path(tar_archive) - - for exclude_dir in self.exclude: - path = os.path.join(self.path, exclude_dir) - if os.path.exists(path): - tools.remove_path(path) - if retcode != 0: - shutil.rmtree(self.path) logging.critical("Failed to make checkout.") - self._compile() + + # Extract only the subdir. + with tarfile.open(tar_archive) as tf: + + def members(): + for tarinfo in tf.getmembers(): + if tarinfo.name.startswith(self.subdir): + yield tarinfo + + tf.extractall(self._tmp_path, members=members()) + shutil.move(self._tmp_path / self.subdir, self.path) + tools.remove_path(tar_archive) + + for exclude_path in self.exclude: + path = self.path / exclude_path + if path.exists(): + tools.remove_path(path) + + retcode = tools.run_command(self.build_cmd, cwd=self.path) + if retcode == 0: + tools.write_file(self._get_sentinel_file(), "") + else: + logging.critical(f"Build failed in {self.path}") + self._cleanup() def _get_sentinel_file(self): return os.path.join(self.path, "build_successful") - def _compile(self): - retcode = tools.run_command(self.build_cmd, cwd=self.path) - if retcode == 0: - tools.write_file(self._get_sentinel_file(), "") - else: - logging.critical(f"Build failed in {self.path}") - def _cleanup(self): pass + + def get_relative_exp_path(self, relpath=""): + """Return a path relative to the experiment directory. + + Use this function to find out where files from the cache will be put in + the experiment directory. 
+ + """ + return os.path.join(f"code-{self.name}", relpath) diff --git a/lab/experiment.py b/lab/experiment.py index 75fb5dd27..3b20264ad 100644 --- a/lab/experiment.py +++ b/lab/experiment.py @@ -136,7 +136,7 @@ def add_resource(self, name, source, dest="", symlink=False): Example:: >>> exp = Experiment() - >>> exp.add_resource("planner", "path/to/planner") + >>> exp.add_resource("planner", "path/to/my-planner") includes my-planner in the experiment directory. You can use ``{planner}`` to reference my-planner in a run's commands:: @@ -611,16 +611,18 @@ def build(self, write_to_disk=True): if not write_to_disk: return - logging.info(f'Experiment path: "{self.path}"') + logging.info(f'Experiment path: "{tools.get_relative_path(self.path)}"') self._remove_experiment_dir() tools.makedirs(self.path) - self.environment.write_main_script() - self._build_new_files() self._build_resources() self._build_runs() self._build_properties_file(STATIC_EXPERIMENT_PROPERTIES_FILENAME) + # The main script may need other experiment files and it adds new files, so write it last. + self.environment.write_main_script() + self._build_new_files() + def start_runs(self): """Execute all runs that were added to the experiment. diff --git a/lab/fetcher.py b/lab/fetcher.py index 07d3fa3c2..0c9e016c4 100644 --- a/lab/fetcher.py +++ b/lab/fetcher.py @@ -1,17 +1,17 @@ -from glob import glob import logging -import os +from pathlib import Path import sys from lab import tools import lab.experiment -def _check_eval_dir(eval_dir): - if os.path.exists(eval_dir): +def _check_eval_dir(eval_dir: Path): + if eval_dir.exists(): answer = ( input( - f"{eval_dir} already exists. Do you want to (o)verwrite it, " + f"{tools.get_relative_path(eval_dir)} already exists. " + f"Do you want to (o)verwrite it, " f"(m)erge the results, or (c)ancel? 
" ) .strip() @@ -44,33 +44,30 @@ class Fetcher: """ def fetch_dir(self, run_dir): + """Combine "static-properties" and "properties" from a run dir and return it.""" + run_dir = Path(run_dir) static_props = tools.Properties( - filename=os.path.join( - run_dir, lab.experiment.STATIC_RUN_PROPERTIES_FILENAME - ) + filename=run_dir / lab.experiment.STATIC_RUN_PROPERTIES_FILENAME ) - dynamic_props = tools.Properties(filename=os.path.join(run_dir, "properties")) + dynamic_props = tools.Properties(filename=run_dir / "properties") props = tools.Properties() props.update(static_props) props.update(dynamic_props) - driver_log = os.path.join(run_dir, "driver.log") - if not os.path.exists(driver_log): + driver_log = run_dir / "driver.log" + if not driver_log.exists(): props.add_unexplained_error( "driver.log is missing. Probably the run was never started." ) - driver_err = os.path.join(run_dir, "driver.err") - run_err = os.path.join(run_dir, "run.err") + driver_err = run_dir / "driver.err" + run_err = run_dir / "run.err" for logfile in [driver_err, run_err]: - if os.path.exists(logfile): - with open(logfile) as f: - content = f.read() + if logfile.exists(): + content = logfile.read_text() if content: - props.add_unexplained_error( - f"{os.path.basename(logfile)}: {content}" - ) + props.add_unexplained_error(f"{logfile.name}: {content}") return props def __call__(self, src_dir, eval_dir=None, merge=None, filter=None, **kwargs): @@ -86,12 +83,17 @@ def __call__(self, src_dir, eval_dir=None, merge=None, filter=None, **kwargs): description of the parameters. 
""" - if not os.path.isdir(src_dir): + src_dir = Path(src_dir) + if not src_dir.is_dir(): logging.critical(f"{src_dir} is missing or not a directory") run_filter = tools.RunFilter(filter, **kwargs) - eval_dir = eval_dir or src_dir.rstrip("/") + "-eval" - logging.info(f"Fetching properties from {src_dir} to {eval_dir}") + eval_dir = eval_dir or str(src_dir).rstrip("/") + "-eval" + eval_dir = Path(eval_dir) + logging.info( + f"Fetching properties from {tools.get_relative_path(src_dir)} " + f"to {tools.get_relative_path(eval_dir)}" + ) if merge is None: _check_eval_dir(eval_dir) @@ -102,13 +104,11 @@ def __call__(self, src_dir, eval_dir=None, merge=None, filter=None, **kwargs): tools.remove_path(eval_dir) # Load properties in the eval_dir if there are any already. - combined_props = tools.Properties(os.path.join(eval_dir, "properties")) - fetch_from_eval_dir = not os.path.exists( - os.path.join(src_dir, "runs-00001-00100") - ) + src_props_file = src_dir / "properties" + fetch_from_eval_dir = src_props_file.exists() + combined_props = tools.Properties(eval_dir / "properties") if fetch_from_eval_dir: - src_path = os.path.join(src_dir, "properties") - src_props = tools.Properties(filename=src_path) + src_props = tools.Properties(filename=src_props_file) if not src_props: logging.critical(f"No properties found in {src_dir}") run_filter.apply(src_props) @@ -124,17 +124,19 @@ def __call__(self, src_dir, eval_dir=None, merge=None, filter=None, **kwargs): logging.error("There was output to *-grid-steps/slurm.err") new_props = tools.Properties() - run_dirs = sorted(glob(os.path.join(src_dir, "runs-*-*", "*"))) - total_dirs = len(run_dirs) - logging.info(f"Scanning properties from {total_dirs:d} run directories") + run_dirs = sorted(src_dir.glob("runs-*-*/*")) + num_dirs = len(run_dirs) + logging.info(f"Collecting properties from {num_dirs:d} run directories") for index, run_dir in enumerate(run_dirs, start=1): - loglevel = logging.INFO if index % 100 == 0 else logging.DEBUG - 
logging.log(loglevel, f"Scanning: {index:6d}/{total_dirs:d}") props = self.fetch_dir(run_dir) if slurm_err_content: props.add_unexplained_error("output-to-slurm.err") id_string = "-".join(props["id"]) new_props[id_string] = props + loglevel = logging.INFO if index % 100 == 0 else logging.DEBUG + logging.log( + loglevel, f"Collected {index:6d}/{num_dirs} properties files" + ) run_filter.apply(new_props) combined_props.update(new_props) diff --git a/lab/tools.py b/lab/tools.py index 6a484b634..6370d7eb4 100644 --- a/lab/tools.py +++ b/lab/tools.py @@ -53,6 +53,17 @@ def get_lab_path(): return os.path.dirname(os.path.dirname(os.path.abspath(__file__))) +def get_relative_path(dest): + """ + Get relative path from cwd to *dest*. + + Return *dest* unchanged if it's not below cwd.""" + try: + return Path(dest).relative_to(Path.cwd()) + except ValueError: + return dest + + def get_python_executable(): return sys.executable or "python" @@ -151,16 +162,17 @@ def confirm_or_abort(question): def confirm_overwrite_or_abort(path): - confirm_or_abort(f'The path "{path}" already exists. Do you want to overwrite it?') + confirm_or_abort( + f'The path "{get_relative_path(path)}" already exists. ' + f"Do you want to overwrite it?" + ) def remove_path(path): - if os.path.isfile(path): - try: - os.remove(path) - except OSError: - pass - else: + path = Path(path) + if path.is_file(): + path.unlink() + elif path.is_dir(): shutil.rmtree(path) @@ -522,7 +534,7 @@ def get_unexplained_errors_message(run): def get_slurm_err_content(src_dir): - grid_steps_dir = src_dir.rstrip("/") + "-grid-steps" + grid_steps_dir = str(src_dir).rstrip("/") + "-grid-steps" slurm_err_filename = os.path.join(grid_steps_dir, "slurm.err") with open(slurm_err_filename) as f: return f.read() diff --git a/setup.py b/setup.py index daae36a17..d78a97531 100644 --- a/setup.py +++ b/setup.py @@ -1,6 +1,6 @@ #! 
/usr/bin/env python -from setuptools import setup +from setuptools import find_packages, setup from lab import __version__ as version @@ -20,7 +20,7 @@ author_email="jendrikseipp@gmail.com", url="https://github.com/aibasel/lab", license="GPL3+", - packages=["downward", "downward.reports", "lab", "lab.calls", "lab.reports"], + packages=find_packages("."), package_data={"downward": ["scripts/*.py"], "lab": ["data/*", "scripts/*.py"]}, classifiers=[ "Development Status :: 5 - Production/Stable", diff --git a/tests/run-downward-experiment b/tests/run-downward-experiment index 10a077de8..301cb25d1 100755 --- a/tests/run-downward-experiment +++ b/tests/run-downward-experiment @@ -7,9 +7,10 @@ DIR=$(realpath ${DIR}) REPO=$(dirname ${DIR}) EXPDIR=${DOWNWARD_REPO}/experiments/tmp-downward-lab-project -rm -rf ${EXPDIR}/data/2020-09-11-* +rm -rf ${EXPDIR}/data/ mkdir -p ${EXPDIR} cp ${REPO}/examples/downward/*.py ${EXPDIR} cp ${REPO}/examples/downward/bounds.json ${EXPDIR} -${DIR}/run-example-experiment ${EXPDIR}/2020-09-11-A-cg-vs-ff.py -${DIR}/run-example-experiment ${EXPDIR}/2020-09-11-B-bounded-cost.py +${DIR}/run-example-experiment ${EXPDIR}/2020-09-11-A-cg-vs-ff.py 1 2 3 6 9 +${DIR}/run-example-experiment ${EXPDIR}/2020-09-11-B-bounded-cost.py 1 2 3 6 +${DIR}/run-example-experiment ${EXPDIR}/01-evaluation.py 1 2 3 4 diff --git a/tests/run-example-experiment b/tests/run-example-experiment index 4805e1e9a..76e9f2e9f 100755 --- a/tests/run-example-experiment +++ b/tests/run-example-experiment @@ -4,11 +4,9 @@ set -euo pipefail check () { expname="$1" - if [[ "${expname}" == *"2020-09-11-"* ]]; then - "./${expname}.py" 1 2 3 6 - else - "./${expname}.py" --all - fi + shift 1 + options=("$@") + "./${expname}.py" "${options[@]}" properties="data/$expname-eval/properties" if [[ ! -f "$properties" ]]; then echo "File not found: $properties" @@ -25,6 +23,8 @@ check () { } SCRIPT="$1" +shift 1 # Forget first argument. +OPTIONS=("$@") # All remaining arguments are script options. 
DIR=$(dirname "$SCRIPT") FILENAME=$(basename "$SCRIPT") FILENAME="${FILENAME%.*}" @@ -34,4 +34,4 @@ cd $(dirname "$0")/../examples rm -rf "$DIR/data/$FILENAME" rm -rf "$DIR/data/$FILENAME-eval" pushd "$DIR" -check "$FILENAME" +check "$FILENAME" "${OPTIONS[@]}" diff --git a/tox.ini b/tox.ini index 7ac39b138..f7b11634a 100644 --- a/tox.ini +++ b/tox.ini @@ -1,5 +1,5 @@ [tox] -envlist = py37, py38, py310, py311, ff, singularity, style, docs # Skip py39 since it chokes on distutils. +envlist = py37, py38, py310, py311, downward, ff, singularity, style, docs # Skip py39 since it chokes on distutils. basepython = python3 skip_missing_interpreters = true @@ -8,10 +8,10 @@ deps = pytest commands = pytest --doctest-modules --ignore=downward/scripts downward lab tests examples/showcase-options.py - bash {toxinidir}/tests/run-example-experiment vertex-cover/exp.py - bash {toxinidir}/tests/run-example-experiment lmcut.py - bash {toxinidir}/tests/run-example-experiment showcase-options.py - bash {toxinidir}/tests/run-example-experiment report-external-results.py + bash {toxinidir}/tests/run-example-experiment vertex-cover/exp.py --all + bash {toxinidir}/tests/run-example-experiment lmcut.py --all + bash {toxinidir}/tests/run-example-experiment showcase-options.py --all + bash {toxinidir}/tests/run-example-experiment report-external-results.py --all passenv = CXX DOWNWARD_BENCHMARKS @@ -19,10 +19,12 @@ passenv = DOWNWARD_REVISION_CACHE allowlist_externals = bash +package = wheel +wheel_build_env = .pkg [testenv:ff] commands = - bash {toxinidir}/tests/run-example-experiment ff/ff.py + bash {toxinidir}/tests/run-example-experiment ff/ff.py --all passenv = DOWNWARD_BENCHMARKS allowlist_externals = @@ -40,7 +42,7 @@ allowlist_externals = [testenv:singularity] commands = - bash {toxinidir}/tests/run-example-experiment singularity/singularity-exp.py + bash {toxinidir}/tests/run-example-experiment singularity/singularity-exp.py --all passenv = DOWNWARD_BENCHMARKS SINGULARITY_IMAGES
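Review note: the ``lab/cached_revision.py`` hunks above fold the new *subdir* parameter into the hashed cache-directory name, so the same revision built from different subdirs lands in distinct cache entries. A minimal sketch of that naming scheme, assuming the private ``_compute_md5_hash`` helper (whose body is not part of this diff) simply MD5-hashes the concatenated option strings:

```python
import hashlib


def hashed_name(global_rev, build_cmd, exclude, subdir=""):
    """Sketch of CachedRevision._compute_hashed_name after this patch.

    Hypothetical stand-in: we assume _compute_md5_hash concatenates and
    MD5-hashes the option strings. The point of the change is that *subdir*
    now contributes to the hash alongside build_cmd and exclude.
    """
    md5 = hashlib.md5()
    for part in build_cmd + (exclude or []) + [subdir]:
        md5.update(part.encode())
    return f"{global_rev}_{md5.hexdigest()}"


# Same revision and build command, different subdirs -> distinct cache names.
name_root = hashed_name("0123abcd", ["./build.py"], ["experiments"])
name_sub = hashed_name("0123abcd", ["./build.py"], ["experiments"], subdir="src")
print(name_root != name_sub)  # True
```

This is why ``_compute_hashed_name`` in the diff appends ``[self.subdir]`` to the hashed list: without it, two ``CachedRevision`` objects differing only in *subdir* would silently share one cache directory.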