
A benchmark for measuring the overhead of runtime enforcement using multi-traces

This artifact contains experiments that measure the overhead of a BeepBeep processor pipeline performing runtime enforcement of security policies using the concept of a "multi-trace".

The goal of this benchmark is to perform an experimental evaluation of the proposed implementation by measuring the execution time and memory consumption of the enforcement pipeline in various scenarios. The data produced by this lab is discussed in the following research article:

R. Taleb, S. Hallé, R. Khoury. (2021). A Modular Runtime Enforcement Model
Using Multi-Traces. 14th International Symposium on Foundations & Practice
of Security, Springer LNCS 13291. DOI and page numbers were not yet
available at the time the artifact was submitted (2022-02-06).

This JAR file contains an instance of LabPal, an environment for running experiments on a computer and collecting their results in a user-friendly way. The author of this archive has set up a suite of experiments, which typically involve running scripts on input data, processing their results and displaying them in tables and plots. LabPal is a library that wraps around these experiments and displays them in an easy-to-use web interface. The principle behind LabPal is that all the necessary code, libraries and input data should be bundled within a single self-contained JAR file, so that anyone can download and easily reproduce someone else's experiments.

All the plots and other data values mentioned in the paper are automatically generated by the execution of this lab. The lab also provides additional tables and plots that could not fit into the manuscript. Detailed instructions can be found on the LabPal website: https://liflab.github.io/labpal

The bundled files have been compiled with Java 11.
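
Before launching the lab, you may want to confirm that a suitable Java runtime is on your path; the following standard JDK command (nothing specific to this artifact) prints the installed version, which should report 11 or later:

java -version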

Running LabPal

To start the lab, open a terminal window and type the following at the command line:

java -jar multitrace-enforcement-lab.jar --autostart

You should see something like this:

LabPal 2.8 - A versatile environment for running experiments
(C) 2014-2017 Laboratoire d'informatique formelle
Université du Québec à Chicoutimi, Canada
Please visit http://localhost:21212/index to run this lab
Hit Ctrl+C in this window to stop

Open a web browser and type http://localhost:21212/index in the address bar. This should lead you to the main page of LabPal's web control panel.
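
If the page does not load, a quick way to verify that the LabPal server is up is to query it from a second terminal window (this assumes the curl tool is installed on your system; a successful check prints the HTML of the control panel's main page):

curl -s http://localhost:21212/index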

Using the web interface

A detailed explanation of how to use the LabPal web interface can be found in this YouTube video: https://www.youtube.com/watch?v=5uL7i6SytyM. A lab is made of a set of experiments, each corresponding to a specific set of instructions that runs and generates a subset of all the benchmark's results. Results from experiments are collected and processed into various auto-generated tables and plots.

Since the lab was started with the --autostart option, it immediately begins running all the experiments it contains. You can follow the progress of these experiments by going to the Status page and refreshing it periodically. At any point, you can look at the results of the experiments that have run so far. You can do so by:

  • Going to the Plots page (5th button in the top menu) or the Tables page (6th button) and seeing the plots and tables created for this lab being updated in real time
  • Going back to the list of experiments, clicking on one of them and getting the detailed description and data points that this experiment has generated

Once the lab assistant is done, you can export any of the plots and tables to a file, or export the raw data points, by using the Export button on the Status page.

Comparing results from the paper

An interesting feature of LabPal, described in this other YouTube video (https://www.youtube.com/watch?v=StXflS52h4s), is that it exports its results directly into a research paper. If you look at the PDF of the paper, you will see that the plots and some other elements in the text are hyperlinks. These links can be used to fetch the corresponding plot or data element inside the running LabPal instance.

For example, locate Figure 4b and hover your mouse over it. You should see that this plot has a hyperlink with the text "P54.0". Copy that link, then go to the LabPal console in the browser and click on the "Find" button (rightmost button in the top bar). Paste the text "P54.0" in the search bar and click on "Find". You should be taken directly to the plot that corresponds to Figure 4b in the paper, and can visually compare the two. (Make sure that the lab has finished running before making this comparison; otherwise, what you will see is a partial plot with whatever results have been generated so far.)

You can check other elements in the paper with similar hyperlinks. For example, in Table 1, the value "9824" in the top-right cell is also a hyperlink ("T1.12.5"). This corresponds to a value that was computed in the lab and inserted directly into the table. Search for T1.12.5 in the lab using the same technique as above; you should be taken to the Tables page, where the value that appears in the paper is highlighted (and should hopefully be the same!). Clicking on this cell in the interface takes you to the summary page of the experiment this value comes from.

All parts of the paper that refer to experimental data are linked to the lab in this way, making it possible to cross-check every claim that relies on this data.

Inspecting source code

The source code for the lab and the source code of the underlying library being benchmarked are included within the JAR file and can be inspected. Within the archive's structure, this code is located in the folders enforcementlab (for the benchmark) and ca/uqac/lif/cep/enforcement (for the library). The remaining packages within the JAR are dependencies that are not part of the paper's contribution; they are only present in precompiled form (.class files).
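
As a sketch of how this inspection can be done with standard tools (the jar utility ships with the JDK, unzip is assumed to be installed, and the folder name extracted-sources is only an example), the following commands list the bundled source files and extract the two folders of interest:

jar tf multitrace-enforcement-lab.jar | grep "\.java$"
unzip multitrace-enforcement-lab.jar "enforcementlab/*" "ca/uqac/lif/cep/enforcement/*" -d extracted-sources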

2022-02-06