Run make check
    - ignore generated files
    - remove failing tests for conversion from old submission
bobot committed May 14, 2024
1 parent 8ba0da9 commit 8c29706
Showing 10 changed files with 33 additions and 36 deletions.
2 changes: 2 additions & 0 deletions .pre-commit-config.yaml
@@ -7,7 +7,9 @@ repos:
- id: check-toml
- id: check-yaml
- id: end-of-file-fixer
exclude: web/content/solver_submission/schema.html
- id: trailing-whitespace
exclude: web/content/solver_submission/schema.html

# - repo: https://github.com/astral-sh/ruff-pre-commit
# # Ruff version.
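The `exclude` key added in this hook configuration is, per pre-commit's documentation, a Python regular expression matched against the repository-relative file path; matching paths are skipped by the hook. A minimal sketch of that matching behaviour (the second path is a made-up illustration, not a file from this repo):

```python
import re

# pre-commit matches `exclude` as a Python regex against the
# repo-relative path; matching files are skipped by the hook.
exclude = re.compile(r"web/content/solver_submission/schema.html")

paths = [
    "web/content/solver_submission/schema.html",  # generated file, skipped
    "web/content/solver_submission/_index.md",    # hypothetical, still checked
]
skipped = [p for p in paths if exclude.search(p)]
print(skipped)
```

This is why the generated `schema.html` no longer trips the `end-of-file-fixer` and `trailing-whitespace` hooks.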
1 change: 1 addition & 0 deletions .prettierignore
@@ -1,3 +1,4 @@
archive
_site
web/themes/smtcomp/layouts/
web/content/solver_submission/schema.html
6 changes: 3 additions & 3 deletions Makefile
@@ -15,14 +15,14 @@ check: ## Run code quality tools.
@echo "🚀 Static type checking: Running mypy"
@poetry run mypy
@echo "🚀 Checking for obsolete dependencies: Running deptry"
@poetry run deptry .
@poetry run deptry . --extend-exclude "archive|_site"

.PHONY: test
test: ## Test the code with pytest
@echo "🚀 Generating submissions/Readme.md"
@poetry run python3 submissions/template/generate_Readme.py generate
@echo "🚀 Testing code: Running pytest"
@poetry run pytest --cov --cov-config=pyproject.toml --cov-report=xml
@poetry run pytest

.PHONY: build
build: clean-build ## Build wheel file using poetry
@@ -51,4 +51,4 @@ submission-doc:
generate-schema-doc --expand-buttons --no-link-to-reused-ref $(GENERATED_SCHEMA_FILE) $(GENERATED_SCHEMA_HTML)

hugo-server:
(cd web; hugo server)
(cd web; hugo server)
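deptry's `--extend-exclude` flag adds regular expressions to its default exclusion list, so `"archive|_site"` keeps the dependency scan away from both the `archive` tree of old submissions and the generated `_site` output. A rough sketch of how such a regex filters paths (the file names are made up for illustration):

```python
import re

# --extend-exclude adds these regexes to deptry's exclusion list;
# any path matching "archive" or "_site" is skipped during the scan.
pattern = re.compile(r"archive|_site")

candidates = [
    "archive/2023/solver.py",  # hypothetical old submission, excluded
    "_site/index.html",        # generated site output, excluded
    "smtcomp/main.py",         # source under analysis
]
analysed = [p for p in candidates if not pattern.search(p)]
print(analysed)
```

Only the real source tree is left for deptry to analyse, which is what lets `make check` pass with the generated files in place.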
12 changes: 3 additions & 9 deletions README.md
@@ -33,7 +33,8 @@ To enable the code coverage reports, see [here](https://fpgmaas.github.io/cookie
## Using the smtcomp tool for generating benchexec

#### Generate submissions [Optional]
The final solvers submitted during the smtcomp 2023 can be used:

The final solvers submitted during the smtcomp 2023 can be used:

```
smtcomp convert-csv tests/solvers_divisions_final.csv ../tmp/submissions
@@ -54,7 +55,7 @@ smtcomp download-archive ../tmp/submissions/*.json ../tmp/benchexec/execution
Trivial tests benchmarks generated with:

```
smtcomp generate-benchmarks ../tmp/benchexec/includes/
smtcomp generate-benchmarks ../tmp/benchexec/includes/
```

The benchexec tasks generated using:
@@ -81,13 +82,6 @@ Benchexec in verifier cloud started using:
(cd ../tmp/benchexec/execution; PATH_TO_BENCHEXEC/contrib/vcloud-benchmark.py ../includes/all.xml --read-only-dir / --overlay-dir /home --full-access-dir .. --numOfThreads 8 -M 2GB -c 1 --vcloudMaster VCLOUD_MASTER --vcloudClientHeap 500)
```








---

Repository initiated with [fpgmaas/cookiecutter-poetry](https://github.com/fpgmaas/cookiecutter-poetry).
12 changes: 10 additions & 2 deletions smtcomp/execution.py
@@ -4,25 +4,33 @@
import wget
from rich import print


def trace_executor_url() -> str:
return "https://github.com/SMT-COMP/trace-executor/releases/download/smtcomp2022/SMT-COMP-2022-trace-executor.tar.gz"
return (
"https://github.com/SMT-COMP/trace-executor/releases/download/smtcomp2022/SMT-COMP-2022-trace-executor.tar.gz"
)


def trace_executor_filename() -> str:
return "SMT-COMP-2022-trace-executor.tar.gz"


def download_trace_executor(dst: Path) -> None:
dst.mkdir(parents=True, exist_ok=True)
url = trace_executor_url()
wget.download(url, str(dst))
print("Download done")


def unpack_trace_executor(dst: Path) -> None:
filename = trace_executor_filename();
filename = trace_executor_filename()
extract_all_with_executable_permission(dst.joinpath(filename), dst)
print("Unpacking done")


import subprocess


def copy_tool_module(dst: Path) -> None:
script_path = Path(__file__).parent
subprocess.run(["cp", script_path / "tool.py", dst])
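`unpack_trace_executor` delegates to a helper, `extract_all_with_executable_permission`, so that the trace executor's binaries stay runnable after extraction. A self-contained sketch of that idea using only the standard library's `tarfile` (which restores the permission bits recorded in the archive) might look like this — an assumption about the helper's behaviour, not its actual implementation:

```python
import tarfile
from pathlib import Path


def extract_preserving_exec(archive: Path, dst: Path) -> None:
    """Extract a tarball into dst, keeping executable bits.

    tarfile restores the mode bits recorded in the archive, so
    scripts and binaries packed as executable remain executable.
    """
    dst.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(dst)
```

This is what makes the unpacked `SMT-COMP-2022-trace-executor.tar.gz` directly usable by benchexec without a separate `chmod` step.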
1 change: 1 addition & 0 deletions smtcomp/main.py
@@ -84,6 +84,7 @@ def prepare_execution(dst: Path) -> None:
execution.unpack_trace_executor(dst)
execution.copy_tool_module(dst)


@app.command()
def generate_benchexec(
files: List[Path],
2 changes: 1 addition & 1 deletion smtcomp/tool.py
@@ -12,7 +12,7 @@ class Tool(BaseTool2): # type: ignore
Generic tool for smtcomp execution
"""

REQUIRED_PATHS = ['unpack']
REQUIRED_PATHS = ["unpack"]

def determine_result(self, run: BaseTool2.Run) -> Any: # type: ignore
"""Adaptation of Jochen Hoenicke process script
8 changes: 0 additions & 8 deletions tests/test_validate.py
@@ -32,11 +32,3 @@ def test_bad_json(name: str) -> None:
@pytest.mark.parametrize("submission", submissions)
def test_submission(submission: str) -> None:
read(submission)


csv = ["tests/SMT-COMP 2023 System Registration.csv"]


@pytest.mark.parametrize("csv", csv)
def test_csv(csv: str, tmp_path: Path) -> None:
convert_csv(Path(csv), tmp_path)
23 changes: 11 additions & 12 deletions web/content/parallel_cloud/index.md
@@ -7,55 +7,54 @@ draft = false
## Parallel and Cloud Tracks

Like last year we are organising new, experimental, cloud and parallel tracks
for SMT-COMP. Similar tracks were organised in the SAT competition 2020
for SMT-COMP. Similar tracks were organised in the SAT competition 2020
and the competition had a positive impact on the development of parallel
SAT solvers (see <https://satcompetition.github.io/2020/>).

### The Concept

The goal in both the cloud and the parallel tracks is to measure the
success of a solver in solving a single, hard instance. This will be done by
success of a solver in solving a single, hard instance. This will be done by
giving solvers instances one at a time, similar to the SMT-COMP single-query
track. The participating solvers will be scored based on the number of
track. The participating solvers will be scored based on the number of
instances that a solver solves within the per-instance wall-clock time limit
and the total run time, similar to the single-query track’s parallel score.

For these tracks we need to choose in total 400 benchmarks from the
single-query track logics, and we are specifically reaching out to you, the
community and especially the competitors, for suggesting suitable instances to
be included in the tracks. In addition we will try to identify instances that
are considered interesting. All instances should come from the smt-lib
be included in the tracks. In addition we will try to identify instances that
are considered interesting. All instances should come from the smt-lib
benchmark library.

The solver submission rules follow those of the rest of the tracks. However,
The solver submission rules follow those of the rest of the tracks. However,
on this track we do accept portfolio solvers, as defined in the rules, as
competitors. We encourage submission of non-portfolio solvers and reserve the
competitors. We encourage submission of non-portfolio solvers and reserve the
right to give special mentions to non-portfolio solvers in reporting the
results.

While the standard competition will be run in StarExec, the parallel and cloud
tracks will be run on Amazon Web Services. AWS has kindly agreed to sponsor
tracks will be run on Amazon Web Services. AWS has kindly agreed to sponsor
the participants in the testing phase.

### Cloud track

The Cloud Track evaluates the effectiveness of parallel SMT solvers to run in a
distributed manner. The solvers participating in this track will be executed
distributed manner. The solvers participating in this track will be executed
with a wall-clock time limit of 20 minutes running on 100 m4.4xlarge machines
in parallel. Each m4.4xlarge machine has 16 virtual CPUs and 64 GB memory.
Communication between the machines is possible using MPI and SSH.

Participants of this track are required to submit their solver via a GitHub
repository (which can be private). The repository should contain a docker file
that compiles the solver. As an example, scripts for account configuration and
that compiles the solver. As an example, scripts for account configuration and
instructions to run HordeSAT in the default configuration are available at
<https://github.com/aws-samples/aws-batch-comp-infrastructure-sample>.


### Parallel track

The solvers participating in this track will be executed with a wall-clock time
limit of 20 minutes, thus similar to the Single Query Track. Each solver will
limit of 20 minutes, thus similar to the Single Query Track. Each solver will
be run on a single AWS machine of the type m4.16xlarge, which has 64 virtual
cores and 256GB of memory. More details about m4.16xlarge nodes can be found
[here](https://aws.amazon.com/about-aws/whats-new/2016/09/introducing-new-m4-instance-size-m4-16xlarge-and-new-region-availability-of-m4-instances/).
2 changes: 1 addition & 1 deletion web/hugo.toml
@@ -58,4 +58,4 @@ theme = 'smtcomp'
[[menu.year]]
name = 'Parallel & Cloud Tracks'
pageRef = 'parallel_cloud'
weight = 30
weight = 30
