Move runtime testing to its own crate
Signed-off-by: Ryan Levick <[email protected]>
rylev committed Jan 10, 2024
1 parent 48187b0 commit 5ff3489
Showing 41 changed files with 95 additions and 64 deletions.
13 changes: 12 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

9 changes: 8 additions & 1 deletion Cargo.toml
@@ -91,6 +91,7 @@ which = "4.2.5"
e2e-testing = { path = "crates/e2e-testing" }
http-body-util = { workspace = true }
testing-framework = { path = "tests/testing-framework" }
runtime-tests = { path = "tests/runtime-tests" }
test-components = { path = "tests/test-components" }
test-codegen-macro = { path = "crates/test-codegen-macro" }

@@ -114,7 +115,13 @@ llm-metal = ["llm", "spin-trigger-http/llm-metal"]
llm-cublas = ["llm", "spin-trigger-http/llm-cublas"]

[workspace]
members = ["crates/*", "sdk/rust", "sdk/rust/macro", "tests/testing-framework"]
members = [
"crates/*",
"sdk/rust",
"sdk/rust/macro",
"tests/runtime-tests",
"tests/testing-framework",
]

[workspace.dependencies]
anyhow = "1.0.75"
5 changes: 3 additions & 2 deletions crates/test-codegen-macro/src/lib.rs
@@ -4,9 +4,10 @@ use std::{env, path::PathBuf};

/// This macro generates the `#[test]` functions for the runtime tests.
#[proc_macro]
pub fn codegen_tests(_input: TokenStream) -> TokenStream {
pub fn codegen_runtime_tests(_input: TokenStream) -> TokenStream {
let mut tests = Vec::new();
let tests_path = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../../tests/runtime-tests");
let tests_path =
PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../../tests/runtime-tests/tests");
let tests_path_string = tests_path
.to_str()
.expect("CARGO_MANIFEST_DIR is not valid utf8")
16 changes: 16 additions & 0 deletions tests/runtime-tests/Cargo.toml
@@ -0,0 +1,16 @@
[package]
name = "runtime-tests"
version.workspace = true
authors.workspace = true
edition.workspace = true
license.workspace = true
homepage.workspace = true
repository.workspace = true
rust-version.workspace = true

[dependencies]
anyhow = "1.0"
env_logger = "0.10.0"
log = "0.4"
reqwest = { workspace = true }
testing-framework = { path = "../testing-framework" }
44 changes: 44 additions & 0 deletions tests/runtime-tests/README.md
@@ -0,0 +1,44 @@
# Runtime tests

Runtime tests are a specific type of test for verifying the runtime behavior of a Spin-compliant runtime.

For the purposes of these tests, an "application" is a collection of the following things:
* A Spin-compliant WebAssembly binary
* A `spin.toml` manifest
* Optional `runtime-config.toml` files

## What are runtime tests supposed to test and not test?

Runtime tests are meant to test the runtime functionality of Spin. In other words, they ensure that a valid combination of a Spin manifest and some number of Spin-compliant WebAssembly binaries performs in expected ways or fails in expected ways.

Runtime tests are not full end-to-end integration tests, and thus there are some things they do not concern themselves with, including:
* Different arguments to the Spin CLI
* Failure cases that prevent Spin from starting a running HTTP server (e.g., a malformed manifest, malformed WebAssembly binaries, etc.)
* Bootstrapping WebAssembly modules into compliant WebAssembly components (e.g., turning Wasm modules created with JavaScript tooling into WebAssembly components using `js2wasm`)

## How do I run the tests?

The runtime tests can either be run as a library function (e.g., this is how they are run as part of Spin's test suite using `cargo test`), or they can be run standalone using the `runtime-tests` crate's binary (i.e., by running `cargo run` from this directory).

## How do I add a new test?

To add a new test, add a new folder to the `tests` directory containing at least a `spin.toml` manifest.

The manifest is actually a template: the test runner interpolates a few values into it through `%{key=value}` annotations, where `key` is one of a limited number of keys the test runner supports (a sketch of such a template follows this list). The supported keys are:

* `source`: The manifest can reference pre-built Spin-compliant WebAssembly modules found in the `test-components` folder in the Spin repo. The value names the test component to be used. For example, `%{source=sqlite}` will use the test component named "sqlite" found in the `test-components` directory.
* `port`: The manifest can reference a port that has been exposed by a service (see the services section of the testing framework's README). For example, if the test runner sees `%{port=1234}`, it will look for a service that exposes guest port 1234 on some randomly assigned host port and replace `%{port=1234}` with that randomly assigned port.
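
For illustration, a manifest template might look roughly like the following. This is a hypothetical sketch in the Spin 2 manifest format; the application name, route, and environment variable are made up, and only the `%{...}` annotations are meaningful to the test runner.

```toml
spin_manifest_version = 2

[application]
name = "runtime-test"
version = "0.1.0"

[[trigger.http]]
route = "/..."
component = "test"

[component.test]
# Replaced by the test runner with the pre-built test component named "sqlite".
source = "%{source=sqlite}"
# Replaced with the host port that forwards to a service's guest port 1234 (illustrative).
environment = { SERVICE_URL = "http://127.0.0.1:%{port=1234}" }
```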

The test directory may additionally contain:
* an `error.txt` file if the Spin application is expected to fail
* a `services` config file (see the testing framework's README for more on services)

### The testing protocol

The test runner will make a GET request against the `/` path. The component should either return a 200 if everything goes well or a 500 if there is an error. If an `error.txt` file is present, the Spin application must return a 500 with the body set to some error message that contains the contents of `error.txt`.
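
A minimal sketch of that check (hypothetical code, not the crate's actual API; it assumes `reqwest` with the `blocking` feature enabled):

```rust
use anyhow::bail;

/// Hypothetical illustration of the runner's check; the name and signature are made up.
fn check_response(base_url: &str, expected_error: Option<&str>) -> anyhow::Result<()> {
    // The runner only ever issues a GET against the root path.
    let response = reqwest::blocking::get(format!("{base_url}/"))?;
    let status = response.status().as_u16();
    let body = response.text()?;
    match expected_error {
        // No error.txt in the test directory: the component must answer with a 200.
        None if status == 200 => Ok(()),
        // error.txt present: expect a 500 whose body contains its contents.
        Some(expected) if status == 500 && body.contains(expected) => Ok(()),
        _ => bail!("unexpected response: status {status}, body {body:?}"),
    }
}
```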

## When do tests pass?

A test passes in either of the following conditions:
* The Spin web server returns a 200 (no error was expected)
* The Spin web server returns a 500 with a body that contains the text in the test's `error.txt` file
@@ -1,9 +1,9 @@
use crate::manifest_template::ManifestTemplate;
use crate::spin::Spin;
use crate::test_environment::{TestEnvironment, TestEnvironmentConfig};
use crate::{OnTestError, ServicesConfig, TestError, TestResult};
use anyhow::Context;
use std::path::{Path, PathBuf};
use testing_framework::{
ManifestTemplate, OnTestError, ServicesConfig, Spin, TestEnvironment, TestEnvironmentConfig,
TestError, TestResult,
};

/// Configuration for a runtime test
pub struct RuntimeTestConfig {
@@ -137,8 +137,7 @@ impl RuntimeTest<Spin> {

fn services_config(config: &RuntimeTestConfig) -> anyhow::Result<ServicesConfig> {
let required_services = required_services(&config.test_path)?;
let service_definitions = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("services");
let services_config = ServicesConfig::new(service_definitions, required_services)?;
let services_config = ServicesConfig::new(required_services)?;
Ok(services_config)
}

@@ -1,6 +1,7 @@
use std::path::PathBuf;

use testing_framework::{OnTestError, RuntimeTest};
use runtime_tests::RuntimeTest;
use testing_framework::OnTestError;

fn main() -> anyhow::Result<()> {
env_logger::init();
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
6 changes: 3 additions & 3 deletions tests/runtime.rs
@@ -4,15 +4,15 @@ mod runtime_tests {

// The macro inspects the tests directory and
// generates individual tests for each one.
test_codegen_macro::codegen_tests!();
test_codegen_macro::codegen_runtime_tests!();

fn run(test_path: PathBuf) {
let config = testing_framework::RuntimeTestConfig {
let config = runtime_tests::RuntimeTestConfig {
test_path,
spin_binary: env!("CARGO_BIN_EXE_spin").into(),
on_error: testing_framework::OnTestError::Panic,
};
testing_framework::RuntimeTest::bootstrap(config)
runtime_tests::RuntimeTest::bootstrap(config)
.expect("failed to bootstrap runtime tests tests")
.run();
}
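
For reference, the generated tests are roughly shaped like the following (a hedged sketch; the directory name and path literal are illustrative, and the real macro may differ in detail):

```rust
// Hypothetical expansion of `codegen_runtime_tests!()` for a test directory named "sqlite".
#[test]
fn sqlite() {
    // The macro embeds the path to the test's directory and hands it to `run`.
    run("tests/runtime-tests/tests/sqlite".into())
}
```
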
1 change: 0 additions & 1 deletion tests/testing-framework/Cargo.toml
@@ -5,7 +5,6 @@ edition = "2021"

[dependencies]
anyhow = { workspace = true }
env_logger = "0.10.0"
fslock = "0.2.1"
log = "0.4"
nix = "0.26.1"
46 changes: 0 additions & 46 deletions tests/testing-framework/README.md
@@ -31,49 +31,3 @@ Both Docker and Python based services can expose some logical port number that w

* Python: Python based services can do this by printing `PORT=($PORT1, $PORT2)` to stdout where the $PORT1 is the logical port the service exposes and $PORT2 is the random port actually being exposed (e.g., `PORT=(80, 59392)`)
* Docker: Docker services can do this by exposing the port in their Dockerfile (e.g., `EXPOSE 3306`)
The runtime tests ensure that Spin can properly run applications.

For the purposes of these tests, an "application" is a collection of the following things:
* A Spin compliant WebAssembly binary
* A spin.toml manifest
* Optional runtime-config.toml files

## Runtime tests

Runtime tests are a specific type of test for testing the runtime behavior of a Spin compliant runtime.

### What are runtime tests supposed to test and not test?

Runtime tests are meant to test the runtime functionality of Spin. In other words, they ensure that a valid combination of Spin manifest and some number of Spin compliant WebAssembly binaries perform in expected ways or fail in expected ways.

Runtime tests are not full end-to-end integration tests, and thus there are some things they do not concern themselves with including:
* Different arguments to the Spin CLI
* Failure cases that cause Spin not to start a running http server (e.g., malformed manifest, malformed WebAssembly binaries etc.)
* Bootstrapping WebAssembly modules into compliant WebAssembly components (e.g., turning Wasm modules created with JavaScript tooling into WebAssembly components using `js2wasm`)

## How do I run the tests?

The runtime tests can either be run as a library function (e.g., this is how they are run as part of Spin's test suite using `cargo test`), or they can be run stand alone using the `runtime-tests` crate's binary (i.e., running `cargo run` from this directory).

## How do I add a new test?

To add a new test you must add a new folder to the `tests` directory with at least a `spin.toml` manifest.

The manifest is actually a template that allows for a few values to be interpolated by the test runner. The interpolation happens through `%{key=value}` annotations where `key` is one of a limited number of keys the test runner supports. The supported keys are:

* `source`: The manifest can reference pre-built Spin compliant WebAssembly modules that can be found in the `test-components` folder in the Spin repo. The value is substituted for the name of the test component to be used. For example `%{source=sqlite}` will use the test-component named "sqlite" found in the `test-components` directory.
* `port`: The manifest can reference a port that has been exposed by a service (see the section on services below). For example, if the test runner sees `%{port=1234}` it will look for a service that exposes the guest port 1234 on some randomly assigned host port and substitute `%{port=1234}` for that randomly assigned port.

The test directory may additionally contain:
* an `error.txt` if the Spin application is expected to fail
* a `services` config file (more on this below)

### The testing protocol

The test runner will make a GET request against the `/` path. The component should either return a 200 if everything goes well or a 500 if there is an error. If an `error.txt` file is present, the Spin application must return a 500 with the body set to some error message that contains the contents of `error.txt`.

## When do tests pass?

A test will pass in the following conditions:
* The Spin web server returns a 200
* The Spin web server returns a 500 with a body that contains the same text inside of the test's error.txt file.
2 changes: 0 additions & 2 deletions tests/testing-framework/src/lib.rs
@@ -6,13 +6,11 @@
mod io;
mod manifest_template;
mod runtime_test;
mod services;
mod spin;
mod test_environment;

pub use manifest_template::ManifestTemplate;
pub use runtime_test::{RuntimeTest, RuntimeTestConfig};
pub use services::ServicesConfig;
pub use spin::Spin;
pub use test_environment::{TestEnvironment, TestEnvironmentConfig};
3 changes: 2 additions & 1 deletion tests/testing-framework/src/services.rs
@@ -85,7 +85,8 @@ pub struct ServicesConfig {

impl ServicesConfig {
/// Create a new services config given a path to service definitions and a list of services to start.
pub fn new(definitions: PathBuf, services: Vec<String>) -> anyhow::Result<Self> {
pub fn new(services: Vec<String>) -> anyhow::Result<Self> {
let definitions = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("services");
let service_definitions = service_definitions(&definitions)?;
Ok(Self {
services,
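
With this change the service-definitions directory is resolved from the testing framework's own `CARGO_MANIFEST_DIR`, so callers only name the services a test requires. A hypothetical caller-side sketch (the function and service name are illustrative):

```rust
fn configure_services() -> anyhow::Result<testing_framework::ServicesConfig> {
    // Only the required service names are passed; the framework locates its
    // own `services` directory internally.
    testing_framework::ServicesConfig::new(vec!["mysql".into()])
}
```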
