
Z3-Noodler submission #52

Merged
merged 5 commits into from
Jun 18, 2024

Conversation

jurajsic
Contributor

No description provided.


github-actions bot commented May 22, 2024

Summary of modified submissions
Z3-Noodler
├── 5 authors
├── website: https://github.com/VeriFIT/z3-noodler
└── Participations
    └── SingleQuery
        └── QF_Strings
            └── all

@vhavlena

@bobot We are not quite sure how exactly to prepare the Dockerfile. Is it supposed to be run as docker run <image>? How does it relate to the command field in the JSON?

@bobot bobot added the submission Submissions for SMT-COMP label May 26, 2024
@mbromber
Contributor

We are really sorry, but we made an error in the last iteration of the rules. The requirement that the submission be a Dockerfile was an idea that we forgot to remove. The submitted file must be an archive that contains the precompiled executable (statically linked is preferable). It will be executed on a machine that has the same installation as the Docker image registry.gitlab.com/sosy-lab/benchmarking/competition-scripts/user:latest
For more details on the image, see https://gitlab.com/sosy-lab/benchmarking/competition-scripts/#computing-environment-on-competition-machines.
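A minimal sketch of packaging such an archive, assuming a hypothetical binary name and directory layout (the paths and the stand-in binary below are illustrative, not taken from the actual submission):

```shell
# Hypothetical layout; in a real submission this would be the statically
# linked solver binary produced by your build, not a stub script.
mkdir -p z3-noodler-submission/bin
printf '#!/bin/sh\necho sat\n' > z3-noodler-submission/bin/z3-noodler
chmod +x z3-noodler-submission/bin/z3-noodler

# Package as a plain archive (not a Dockerfile / image).
tar czf z3-noodler-submission.tar.gz z3-noodler-submission

# Sanity-check the archive contents before uploading.
tar tzf z3-noodler-submission.tar.gz
```

The command field in the submission JSON would then point at the executable's path inside the extracted archive.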

PS: We are not sure whether you are on one of the following mailing lists. If not, we recommend joining at least the smt-announce group; we sometimes post competition updates there, including the information above.

[email protected]
[email protected]
[email protected]

    ],
    "contacts": ["Lukáš Holík <[email protected]>"],
    "archive": {
        "url": "https://drive.google.com/file/d/1XSj2PiVJLDx-JQyJRt76OEloC0dWFJqH/view?usp=sharing"

Contributor
@jurajsic Thanks for submitting Z3-Noodler to SMT-COMP! The link to the archive should point directly to the archive, not to the Google Drive page from which it can be downloaded (we download the archive automatically with a script). Could you change it in the pull request? In your case, it would be something like "url": "https://drive.google.com/uc?export=download&id=1XSj2PiVJLDx-JQyJRt76OEloC0dWFJqH".

I fixed that locally and tried running the current submission on our competition infrastructure with some trivial benchmarks. Everything else seems to be working so far!
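The link rewrite requested above can be done mechanically. This is an illustrative helper, not part of the SMT-COMP tooling; it simply extracts the file ID from a Drive "view" share link and rebuilds it as a direct-download URL:

```python
import re

def drive_direct_url(share_url: str) -> str:
    """Convert a Google Drive 'view' share link into a direct-download URL.
    (Illustrative helper; not part of the competition scripts.)"""
    m = re.search(r"/file/d/([^/]+)/", share_url)
    if not m:
        raise ValueError("not a recognised Drive share link")
    return f"https://drive.google.com/uc?export=download&id={m.group(1)}"

print(drive_direct_url(
    "https://drive.google.com/file/d/1XSj2PiVJLDx-JQyJRt76OEloC0dWFJqH/view?usp=sharing"
))
# → https://drive.google.com/uc?export=download&id=1XSj2PiVJLDx-JQyJRt76OEloC0dWFJqH
```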

Contributor Author

It should be fixed now.

martinjonas pushed a commit that referenced this pull request Jun 10, 2024
#84: Create cvc5-cloud
#74: Draft STP submission
#70: draft yicesQS submission
#68: Create STP-CNFLS
#66: Yices2 SMTCOMP 2024 Submission
#65: Z3-alpha draft PR
#64: Solver submission: cvc5
#63: submission iProver
#61: OSTRICH 1.4
#60: SMT-RAT submission
#57: Amaya's submission for SMT-COMP 2024
#55: plat-smt submission
#54: Add 2024 Bitwuzla submission.
#53: 2024 solver participant submission: OpenSMT
#52: Z3-Noodler submission
#51: Submission Colibri
#45: Submission for smtinterpol
#42: Adding Algaroba to SMTCOMP 2024
@martinjonas
Contributor

@jurajsic We have executed the latest version of Z3-Noodler on a randomly chosen subset of 20 single-query benchmarks from each logic where it participates. The benchmarks are also scrambled by the competition scrambler (with seed 1). You can find the results here: https://www.fi.muni.cz/~xjonas/smtcomp/z3noodler.table.html#/table

Quick explanation:

  • Green status means that the result agrees with the (set-info :status _) annotation from the benchmark.
  • Blue status means that the benchmark has annotation (set-info :status unknown).
  • By clicking on the result (e.g. false, true, ABORTED, …) you can see the command-line arguments with which your solver was called and its output on the benchmark.
  • By clicking on the benchmark name (i.e., *scrambled*.yml), you can see the details of the benchmark including its contents (by clicking on the file link in input_files) and the name of the original benchmark before scrambling (e.g., # original_files: 'non-incremental/AUFBVFP/20210301-Alive2-partial-undef/ph7/583_ph7.smt2').

Please check whether there are any discrepancies, such as missing/extra logics, unexpected aborts or unknowns, and the like. If you update the solver, let me know and I can execute further test runs. We still have plenty of time for several follow-up test runs.
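The green/blue status check described in the bullets above can be sketched as follows. This is a hypothetical helper, not the competition's actual table generator; it reads the (set-info :status ...) annotation from an SMT-LIB benchmark and compares it against a solver's answer:

```python
import re

def expected_status(smt2_text: str) -> str:
    """Return the (set-info :status ...) annotation of an SMT-LIB benchmark,
    or 'unknown' if no annotation is present. (Illustrative only.)"""
    m = re.search(r"\(set-info\s+:status\s+(sat|unsat|unknown)\)", smt2_text)
    return m.group(1) if m else "unknown"

def agrees(solver_result: str, smt2_text: str) -> bool:
    """'Green' if the solver agrees with the annotation; an 'unknown'
    annotation ('blue' in the table) cannot be contradicted."""
    exp = expected_status(smt2_text)
    return exp == "unknown" or solver_result == exp

print(agrees("sat", "(set-info :status sat)\n(assert true)"))    # → True
print(agrees("unsat", "(set-info :status sat)\n(assert true)"))  # → False
```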

@martinjonas martinjonas merged commit be94528 into SMT-COMP:master Jun 18, 2024
5 checks passed
Labels
submission Submissions for SMT-COMP
5 participants