
SMT-RAT submission #60

Merged
merged 5 commits into from
Jun 18, 2024
Conversation

ValentinPromies
Contributor

No description provided.

github-actions bot commented May 27, 2024

Summary of modified submissions

SMT-RAT

@bobot bobot added the submission Submissions for SMT-COMP label May 28, 2024
martinjonas pushed a commit that referenced this pull request Jun 10, 2024
#84: Create cvc5-cloud
#74: Draft STP submission
#70: draft yicesQS submission
#68: Create STP-CNFLS
#66: Yices2 SMTCOMP 2024 Submission
#65: Z3-alpha draft PR
#64: Solver submission: cvc5
#63: submission iProver
#61: OSTRICH 1.4
#60: SMT-RAT submission
#57: Amaya's submission for SMT-COMP 2024
#55: plat-smt submission
#54: Add 2024 Bitwuzla submission.
#53: 2024 solver participant submission: OpenSMT
#52: Z3-Noodler submission
#51: Submission Colibri
#45: Submission for smtinterpol
#42: Adding Algaroba to SMTCOMP 2024
@martinjonas
Contributor

@ValentinPromies We have executed the latest version of SMT-RAT on a randomly chosen subset of 20 single-query benchmarks from each logic where it participates. The benchmarks are also scrambled by the competition scrambler (with seed 1). You can find the results here: https://www.fi.muni.cz/~xjonas/smtcomp/smtrat.table.html#/table

Quick explanation:

  • Green status means that the result agrees with the (set-info :status _) annotation from the benchmark.
  • Blue status means that the benchmark has annotation (set-info :status unknown).
  • By clicking on the result (e.g. false, true, ABORTED, …) you can see the command-line arguments with which your solver was called and its output on the benchmark.
  • By clicking on the benchmark name (i.e., *scrambled*.yml), you can see the details of the benchmark including its contents (by clicking on the file link in input_files) and the name of the original benchmark before scrambling (e.g., # original_files: 'non-incremental/AUFBVFP/20210301-Alive2-partial-undef/ph7/583_ph7.smt2').

Please check whether there are some discrepancies, such as missing/extra logics, unexpected aborts or unknowns, and similar. If you update the solver, let me know and I can execute further test runs. We still have plenty of time for several follow-up test runs.
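The status coloring described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the competition's actual checking code; the function names and the "green"/"blue" labels are assumptions mirroring the explanation above.

```python
import re

def expected_status(smt2_text: str) -> str:
    """Extract the (set-info :status _) annotation from a benchmark's text.

    Returns 'sat', 'unsat', or 'unknown' (the default when no annotation
    is present).
    """
    m = re.search(r"\(set-info\s+:status\s+(sat|unsat|unknown)\)", smt2_text)
    return m.group(1) if m else "unknown"

def classify(solver_result: str, smt2_text: str) -> str:
    """Classify a solver result against the benchmark annotation,
    mirroring the table colors described above (illustrative only)."""
    expected = expected_status(smt2_text)
    if expected == "unknown":
        return "blue"        # no reference answer in the benchmark
    return "green" if solver_result == expected else "discrepancy"
```

For example, a solver answering `sat` on a benchmark annotated `(set-info :status sat)` would be classified "green", while any answer on a benchmark annotated `unknown` shows up "blue".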

@martinjonas
Contributor

@ValentinPromies We have finished test runs of model-validation track. You can find the results here:

As before, you can click on the status of the benchmark to see the output of your solver. If you find any discrepancies or extra/missing logics, please let me know.

Note that we selected only SAT benchmarks for model validation. As a result of that, some logics do not contain any benchmarks. So do not be surprised if you have subscribed to one of these logics and you do not have any result for it. In particular, the logics are: QF_UFFP, QF_UFBVDT, QF_UFDTNIA, QF_NIRA.

@ValentinPromies
Contributor Author

@martinjonas we updated our solver. Can you please run the tests again? Thanks for the help!

@martinjonas
Contributor

> @martinjonas we updated our solver. Can you please run the tests again? Thanks for the help!

Sure, done! You can find the new results on the same webpage as before. I reran both single-query and model-validation tracks.

@martinjonas martinjonas merged commit 0e0c955 into SMT-COMP:master Jun 18, 2024
5 checks passed