This repository contains integration tests for the fabric8-analytics services.
The tests can be run against an existing deployment, or locally via ``docker-compose``.
The following environment variables can be used to test specific deployments:

- ``F8A_API_URL`` - API server URL
- ``F8A_JOB_API_URL`` - jobs service URL

By default, the system running on localhost will be tested.
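For example, to point the test suite at an existing deployment (the URLs shown are placeholders)::

    $ export F8A_API_URL=http://1.2.3.4:32000
    $ export F8A_JOB_API_URL=http://1.2.3.4:34000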
The test suite starts and stops Bayesian multiple times, so it is not currently
containerised itself - you need to suitably configure a Python environment on
the host system, and the user running the integration tests currently needs to
be a member of the ``docker`` group (allowing execution of ``docker-compose``
without ``sudo``). For further information on how to set up and use Docker as a
non-root user, read the detailed steps described here.
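As a sketch, the usual commands to grant that access are (a new login session is needed for the group change to take full effect)::

    $ sudo usermod -aG docker $USER
    $ newgrp docker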
Feature tests are written using ``behave``. To add new feature tests, simply
edit an existing ``<name>.feature`` file in ``features/`` (or create a new one)
and fill in missing steps in ``features/steps/common.py`` (or create a new step
file, where appropriate).
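As a minimal sketch, a new step implementation might look like the following (the step text and endpoint are hypothetical, and the server URL shown is the default ``coreapi_url`` described below)::

    import requests
    from behave import when

    @when('I access the {endpoint} endpoint')
    def access_endpoint(context, endpoint):
        # Store the response on the context so later "Then" steps can check it
        context.response = requests.get('http://localhost:32000' + endpoint)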
- Smoke tests: smoke tests checking that the main API endpoints are available and work as expected
- Server API: API tests for the server module
- Jobs API: API tests for the jobs module
- Stack analysis v2: API tests for the stack analysis endpoint ``/api/v2/stack-analyses/``
- Component analysis: API tests for the component analysis endpoints under ``/api/v1/component-analyses/``
- Selfcheck: checks that the test steps themselves are working correctly
- Stack analysis: API tests for the stack analysis endpoint ``/api/v1/stack-analyses/``
- Known ecosystems: API tests for the known ecosystems endpoint ``/api/v1/ecosystems/``
- Known packages: API tests for the per-ecosystem known packages endpoints under ``/api/v1/packages/``
- Known versions: API tests for the per-package known versions endpoints under ``/api/v1/versions/``
When adding a new feature file, also add it to ``feature_list.txt``, as that determines the set of features executed by the ``runtest.sh`` script.
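For illustration, assuming ``feature_list.txt`` lists one feature file per line (the entries shown here are illustrative; check the existing file for the actual names and convention), adding a hypothetical ``my_new_tests.feature`` would look like::

    smoketests.feature
    server_api.feature
    my_new_tests.feature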
Documentation for the module with test steps is automatically generated into the file ``common.html``. The available test steps are not documented in detail yet, so refer either to the existing scenario definitions for usage examples, or to the step definitions in ``features/steps/common.py`` and the adjacent step files.
No additional changes are needed when adding a new test step file, as ``behave``
will automatically check all Python files in the ``steps`` directory for step
definitions.
Note that a single step definition can be shared amongst multiple steps by stacking decorators. For example::

    import time

    from behave import then, when

    @when('I wait {num:d} seconds')
    @then('I wait {num:d} seconds')
    def pause_scenario_execution(context, num):
        # Sleep to give the system time to settle before the scenario continues
        time.sleep(num)

This allows client pauses to be inserted into both ``Then`` and ``When`` clauses when defining a test scenario.
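For instance, a scenario could then use either phrasing::

    When I wait 10 seconds
    Then I wait 10 seconds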
The ``behave`` hooks in ``features/environment.py`` and some of the common step
definitions add a number of useful attributes and methods to the ``behave``
context.

The available methods include:

- ``is_running()``: indicates whether or not the core API service is running
- ``start_system()``: starts the API service in its default configuration using Docker Compose
- ``teardown_system()``: shuts down the API service and removes all related container volumes
- ``restart_system()``: tears down and restarts the API service in its default configuration
- ``run_command_in_service``: see ``features/environment.py``
- ``exec_command_in_container``: see ``features/environment.py``

The available attributes include:

- ``response``: a ``requests.Response`` instance containing the most recent response retrieved from the server API (steps making requests to the API should set this, steps checking responses from the server should query it)
- ``resource_manager``: a ``contextlib.ExitStack`` instance for registering resources to be cleaned up at the end of the current test scenario
- ``docker_compose_path``: a list of Docker Compose files defining the default configuration when running under Docker Compose

Due to the context lifecycle policies defined by ``behave``, any changes to
these attributes in step definitions only remain in effect until the end of the
current scenario.
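As a minimal sketch of how these fit together (the step text here is hypothetical)::

    from behave import then

    @then('I should get a successful response')
    def check_response_ok(context):
        # context.response is set by whichever earlier step made the API request
        assert context.response is not None, 'no API request has been made yet'
        assert context.response.status_code == 200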
The host environment must be configured with ``docker-compose``, the ``behave``
behaviour driven development testing framework, and a few other dependencies
for particular behavioural checks.
This can be handled either as a user-level component installation::

    $ pip install --user -r requirements.txt

or by setting up a Python virtual environment (either Python 2 or 3) and installing the necessary components::

    $ pip install -r requirements.txt
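For example, creating and using a Python 3 virtual environment might look like::

    $ python3 -m venv venv
    $ source venv/bin/activate
    (venv) $ pip install -r requirements.txt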
The test suite is executed as follows::
$ ./runtest.sh <arguments>
Arguments passed to the test runner are passed through to the underlying
``behave`` invocation, so consult the ``behave`` docs for the full list of
available flags.
Other custom configuration settings available:

- ``-D dump_logs=true`` (optional, default is not to print container logs) - requests display of container logs via ``docker-compose logs`` at the end of each test scenario
- ``-D dump_errors=true`` (optional, default is not to print container logs) - as for ``dump_logs``, but only dumps the logs for scenarios that fail
- ``-D tail_logs=50`` (optional, default is to print 50 lines) - specifies the number of log lines to print for each container when dumping container logs; implies ``dump_errors=true`` if neither ``dump_logs`` nor ``dump_errors`` is specified
- ``-D coreapi_server_image=bayesian/bayesian-api`` (optional, default is ``bayesian/bayesian-api``) - name of Bayesian core API server image
- ``-D coreapi_worker_image=bayesian/cucos-worker`` (optional, default is ``bayesian/cucos-worker``) - name of Bayesian Worker image
- ``-D coreapi_url=http://1.2.3.4:32000`` (optional, default is ``http://localhost:32000``) - URL of the core API server to test
- ``-D breath_time=10`` (optional, default is ``5``) - time to wait before testing
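For example, to dump the last 100 log lines for any failing scenarios while testing a remote deployment (the IP address is a placeholder)::

    $ ./runtest.sh -D dump_errors=true -D tail_logs=100 -D coreapi_url=http://1.2.3.4:32000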
Important: running with non-default image settings will force-retag the given
images as ``bayesian/bayesian-api`` and ``bayesian/worker`` so ``docker-compose``
can find them. This may affect subsequent ``docker`` and ``docker-compose``
calls.
Some of the tests may be quite slow; you can skip them by passing the
``--tags=-slow`` option to ``behave``.
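Since ``runtest.sh`` passes its arguments through to ``behave``, this can be done directly from the test runner::

    $ ./runtest.sh --tags=-slow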
- Make it possible to run the integration tests from a venv even when docker access requires ``sudo``