Automatic testing on EC2

Both the testing framework and this page are a work in progress, and both will be updated.

EC2 Automatic Testing overview

The goal of the automatic testing is to make it possible to run tests and collect their results with minimal effort.

The testing process consists of running multiple instances: one or more system under test (SUT) instances and one or more tester instances.

All instances must be launched, installed, and configured, and must run their applications. The tester output is collected and parsed into meaningful results.

Testing framework

The testing framework is a set of scripts and configuration files. Currently there is one tester and one SUT, and the tester is started and configured manually (if this changes, this page will be updated accordingly).

Scripts

EC2 Management

ec2-tester.sh

The current main entry point. It runs on the tester machine and does the following (see the sketch after this list):

  • Starts the SUT instance and propagates its user_data (see user data in the configuration section)
  • Runs the tester script (see tester.py)
  • Shuts down the SUT instance
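
A minimal sketch of this flow in Python, assuming boto is used to drive EC2; the real ec2-tester.sh is a shell script, and the region, AMI ID, instance type, and tester invocation below are placeholders:

```python
# Illustrative only: the actual ec2-tester.sh is a shell script.
# Region, AMI ID, instance type and file names are placeholders.
import subprocess
import time

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Start the SUT instance and propagate its user_data.
with open("user_data.txt") as f:
    user_data = f.read()
reservation = conn.run_instances("ami-xxxxxxxx", instance_type="m3.medium",
                                 user_data=user_data)
sut = reservation.instances[0]
while sut.update() != "running":
    time.sleep(5)

# Run the tester script (see tester.py), passing it the SUT address.
subprocess.check_call(["./tester.py", "tests/example",
                       "sut_ip=%s" % sut.ip_address])

# Shut down the SUT instance.
conn.terminate_instances(instance_ids=[sut.id])
```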

Open questions:

  • Should it upload the raw/parsed data to S3?
  • Should it run the data parsing scripts?

tester.py (described under Installation and running below)

ec2-utils.sh

A collection of EC2 commands.

Installation and running

install.sh

A script that is run manually on the tester machine before the first test starts; it installs and configures the tester applications.

tester.py

A general utility to run tests. Given a directory, tester.py performs the following tasks:

  • Calculates the current run configuration based on configuration files and command line parameters (e.g. the SUT IP address)
  • Creates executable scripts from templates (see configuration and templates)
  • Runs the created scripts according to their step numbers

tester.py is also used as a utility program to retrieve a configuration value from the combination of configuration files and command line parameters described above.
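
A minimal sketch of what tester.py might look like; the key=value configuration format, the *.template naming, and the leading step number convention (e.g. 01-setup.sh) are assumptions for illustration, not the actual implementation:

```python
#!/usr/bin/env python
# Illustrative sketch of tester.py; the config format, template naming
# and step-number convention are assumptions.
import glob
import os
import stat
import subprocess
import sys
from string import Template

def load_config(test_dir, overrides):
    """Merge configuration files with command line parameters;
    command line parameters (e.g. sut_ip=10.0.0.5) win."""
    config = {}
    for name in ("default.cfg", "test.cfg"):
        path = os.path.join(test_dir, name)
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        key, _, value = line.partition("=")
                        config[key.strip()] = value.strip()
    config.update(overrides)
    return config

def render_scripts(test_dir, config):
    """Create executable scripts from *.template files by substituting
    ${key} placeholders with configuration values."""
    scripts = []
    for tmpl in sorted(glob.glob(os.path.join(test_dir, "*.template"))):
        out = tmpl[:-len(".template")]
        with open(tmpl) as f:
            body = Template(f.read()).substitute(config)
        with open(out, "w") as f:
            f.write(body)
        os.chmod(out, os.stat(out).st_mode | stat.S_IEXEC)
        scripts.append(out)
    return scripts

if __name__ == "__main__":
    test_dir = sys.argv[1]
    overrides = dict(arg.split("=", 1) for arg in sys.argv[2:])
    config = load_config(test_dir, overrides)
    # Scripts carry a leading step number (e.g. 01-setup.sh), so the
    # sorted order is the step order.
    for script in render_scripts(test_dir, config):
        subprocess.check_call([os.path.abspath(script)])
```

For example, `./tester.py tests/memaslap sut_ip=10.0.0.5` would merge the directory's configuration files with the given SUT address, render the templates, and run the resulting scripts in step order.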

Statistics gathering

The results of running each test step are gathered in an output file. Turning the data into relevant results is done in two steps. First, each test run's results are parsed to create a JSON object. Then the multiple objects are combined to compute the average results.
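
A minimal sketch of the second step, assuming each run produced a JSON file whose fields are flat numbers (the actual field names depend on the tester output):

```python
# Average per-field results over several JSON files, one per test run.
# Assumes flat objects with numeric values.
import json
import sys

def average_results(paths):
    totals = {}
    for path in paths:
        with open(path) as f:
            run = json.load(f)
        for key, value in run.items():
            totals[key] = totals.get(key, 0.0) + value
    return dict((key, total / len(paths)) for key, total in totals.items())

if __name__ == "__main__":
    print(json.dumps(average_results(sys.argv[1:]), indent=2))
```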

Output specific

memaslap2json.py

The memaslap2json.py script takes the memaslap output and creates a JSON object from it.

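A minimal sketch of such a parser, assuming memaslap prints statistics as "name: number" pairs (e.g. "TPS: 20576"); the exact output format and field set are assumptions, not taken from the original script:

```python
# Turn memaslap's textual statistics into a single JSON object.
# Assumes "name: number" lines; the real memaslap format may differ.
import json
import re
import sys

STAT = re.compile(r"([A-Za-z_]+):\s*([0-9.]+)")

def memaslap2json(text):
    stats = {}
    for name, value in STAT.findall(text):
        stats[name] = float(value)
    return stats

if __name__ == "__main__":
    # Usage: memaslap ... | ./memaslap2json.py > result.json
    print(json.dumps(memaslap2json(sys.stdin.read()), indent=2))
```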