
Welcome to the Symbolic Execution-based Test Tool Evaluator (SETTE) wiki!

Overview

The SETTE framework can evaluate and compare test input generators (not only symbolic execution-based ones). Our objective was to give fine-grained feedback on several test generators. We collected the important features of imperative languages, implemented a set of code snippets capturing them (see sette-snippets), had the test generators produce test inputs for these snippets, and evaluated the capabilities of the tools based on the results.
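As an illustration, the sketch below shows what such a code snippet might look like. It is a hypothetical example (the class and method names are not taken from sette-snippets), assuming a small Java method whose branches a test input generator is expected to cover.

public final class BranchingSnippet {

    private BranchingSnippet() {
        // snippets are static; the container class is never instantiated
    }

    /**
     * Returns 1 if both conditions hold, 0 otherwise. A test input generator
     * should produce inputs covering both outcomes, e.g. x = 5 and x = 0.
     */
    public static int guardedBranch(int x) {
        if (x > 0 && x % 5 == 0) {
            return 1; // reached only for positive multiples of 5
        }
        return 0;
    }
}

Based on the inputs a tool generates for snippets like this, SETTE can assess which language features and branching constructs the tool handles.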

Evaluated tools

Currently, SETTE supports the following tools (fully automatic execution and evaluation):

In addition, we have evaluated one tool manually:

Structure of the wiki

If you are interested in SETTE, please start with the general overview of the Approach, then head to the Install Instructions page and follow the tutorial (How to Use SETTE). All experiment results can be found in the sette-results repository. If you prefer a quick start, check out our screencast about the framework. If you have any questions or remarks, please feel free to contact us.

Further information and references

For an overview of the approach and the results, see the following paper:

L. Cseppentő and Z. Micskei. "Evaluating code-based test input generator tools". Software Testing, Verification and Reliability, 27(6), 2017. DOI: 10.1002/stvr.1627 (Author's PDF)

When referring to the results or to SETTE, please cite the above paper. BibTeX snippet:

@article{cseppento-stvr-2017,
    title     = {{Evaluating code-based test input generator tools}},
    author    = {Cseppent\H{o}, Lajos and Micskei, Zolt{\'a}n},
    journal   = {{Software Testing, Verification and Reliability}},
    pages     = {1-24},
    year      = {2017},
    volume    = {27},
    number    = {6},
    doi       = {10.1002/stvr.1627},
    publisher = {Wiley},
}

For more implementation details of the SETTE framework, see Lajos Cseppentő's MSc thesis.