Merge pull request #52 from jfilhoGN/15-create-examples-for-the-tips-of-the-tests

create documentation for the guide, examples, and the Stressify API
jfilhoGN authored Dec 9, 2024
2 parents 2eae902 + b16ee4c commit 89e40da
Showing 7 changed files with 280 additions and 16 deletions.
8 changes: 0 additions & 8 deletions .readthedocs.yaml
@@ -1,22 +1,14 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version, and other tools you might need
build:
  os: ubuntu-24.04
  tools:
    python: "3.13"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Optionally, but recommended,
# declare the Python requirements required to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
  install:
    - requirements: docs/requirements.txt
4 changes: 2 additions & 2 deletions README.md
@@ -75,11 +75,11 @@ After a test completes, you'll get detailed performance metrics, including:
- **Iterations**: Total number of requests executed.
- **Success Rate**: Percentage of successful requests.
- **Error Rate**: Percentage of failed requests.
- **Response Times**: Min, Max, Mean, Median, P90, P95, P99.
- **Response Times**: Min, Max, Mean, Std, Median, P90, P95, P99.
- **RPS**: Requests per second.
- **TPS**: Transactions per second.

These results are available in the terminal and the live dashboard.
These results are available in the terminal.

---

90 changes: 86 additions & 4 deletions docs/source/api.rst
@@ -8,19 +8,101 @@ This section provides detailed documentation of the Stressify.jl API.
Functions
---------

.. function:: run_test(endpoint::String; payload::String, method::String, headers::Dict)
.. function:: options(; vus::Int=1, format::String="default", ramp_duration::Union{Float64, Nothing}=nothing, max_vus::Union{Int, Nothing}=nothing, iterations::Union{Int, Nothing}=nothing, duration::Union{Float64, Nothing}=nothing)

Set the test configuration options: the number of virtual users (VUs), the execution format, the ramp-up duration, the maximum number of VUs, the number of iterations, and the total duration.

:param vus: The number of virtual users to simulate.
:param format: The execution format, e.g. "default" or "vus-ramping".
:param ramp_duration: The duration of the ramp-up period.
:param max_vus: The maximum number of virtual users to simulate.
:param iterations: The number of iterations to run. Do not combine with the "vus-ramping" format.
:param duration: The duration of the test in seconds.

Run a performance test against the specified API.
.. function:: run_test(requests::Vararg{NamedTuple})

Run a performance test using the specified configuration.

:param requests: One or more named tuples describing the requests to execute, typically built with the HTTP helper functions below.
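
A minimal usage sketch combining ``options`` and ``run_test`` as documented above; the endpoint URL, the header, and the use of ``http_get`` to build the request tuple are illustrative assumptions:

.. code-block:: julia

   using Stressify

   # Hedged sketch: configure the run, then execute it.
   Stressify.options(
       vus = 3,          # three virtual users
       iterations = 10,  # each VU performs 10 iterations
   )

   results = Stressify.run_test(
       Stressify.http_get("https://httpbin.org/get";
                          headers = Dict("Accept" => "application/json")),
   )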


Supported HTTP methods
----------------------

.. function:: http_get(endpoint::String; headers::Dict)

Send an HTTP GET request to the specified endpoint.

:param endpoint: The API endpoint to test.
:param payload: The JSON payload to send with the request.
:param method: The HTTP method (e.g., "GET", "POST").
:param headers: A dictionary of headers to include in the request.
:returns: A dictionary containing the test results.

.. function:: http_post(endpoint::String; payload::String, headers::Dict)

Send an HTTP POST request to the specified endpoint.

:param endpoint: The API endpoint to test.
:param payload: The JSON payload to send with the request.
:param headers: A dictionary of headers to include in the request.
:returns: A dictionary containing the test results.

.. function:: http_put(endpoint::String; payload::String, headers::Dict)

Send an HTTP PUT request to the specified endpoint.

:param endpoint: The API endpoint to test.
:param payload: The JSON payload to send with the request.
:param headers: A dictionary of headers to include in the request.
:returns: A dictionary containing the test results.

.. function:: http_delete(endpoint::String; headers::Dict)

Send an HTTP DELETE request to the specified endpoint.

:param endpoint: The API endpoint to test.
:param headers: A dictionary of headers to include in the request.
:returns: A dictionary containing the test results.

.. function:: http_patch(endpoint::String; payload::String, headers::Dict)

Send an HTTP PATCH request to the specified endpoint.

:param endpoint: The API endpoint to test.
:param payload: The JSON payload to send with the request.
:param headers: A dictionary of headers to include in the request.
:returns: A dictionary containing the test results.
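
As a hedged sketch, the helpers above can be combined into a single scenario; the URLs, JSON payload, and headers below are placeholders, not part of the Stressify API:

.. code-block:: julia

   using Stressify

   # Hedged sketch: mix several request types in one run.
   Stressify.options(vus = 2, iterations = 5)

   results = Stressify.run_test(
       Stressify.http_get("https://httpbin.org/get";
                          headers = Dict("Accept" => "application/json")),
       Stressify.http_post("https://httpbin.org/post";
                           payload = "{\"name\": \"stressify\"}",
                           headers = Dict("Content-Type" => "application/json")),
       Stressify.http_delete("https://httpbin.org/delete";
                             headers = Dict("Accept" => "application/json")),
   )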


Report Generation
-----------------

.. function:: generate_report(results::Dict)

Generate a detailed report from test results.

:param results: The results dictionary from a test run.
:returns: A JSON string representing the report.

.. function:: save_results_to_json(results::Dict, filepath::String)

Save the test results to a JSON file.

:param results: The results dictionary from a test run.
:param filepath: The path to the output file.
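
A short sketch of generating and persisting a report after a run; the output file name is illustrative, and the run configuration mirrors the sketches above:

.. code-block:: julia

   using Stressify

   Stressify.options(vus = 1, iterations = 5)
   results = Stressify.run_test(
       Stressify.http_get("https://httpbin.org/get";
                          headers = Dict("Accept" => "application/json")),
   )

   report_json = Stressify.generate_report(results)         # JSON string
   Stressify.save_results_to_json(results, "results.json")  # illustrative path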

Data Utils
----------

.. function:: random_csv_row(file_path::String)

Get a random row from a CSV file.

:param file_path: The path to the CSV file.
:returns: A dictionary representing a row from the CSV file.

.. function:: random_json_row(file_path::String)

Get a random row from a JSON file.

:param file_path: The path to the JSON file.
:returns: A dictionary representing a row from the JSON file.
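
A hedged sketch of feeding randomized test data into a request payload; the file paths and the ``"name"`` column are placeholders:

.. code-block:: julia

   using Stressify

   # Pick a random row and interpolate one of its (assumed) columns.
   row = Stressify.random_csv_row("data/users.csv")
   payload = "{\"user\": \"$(row["name"])\"}"

   json_row = Stressify.random_json_row("data/users.json")
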
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -3,7 +3,7 @@
sys.path.insert(0, os.path.abspath('.'))

project = 'Stressify'
author = 'jfilhogn'
author = 'João Martins Filho @jfilhogn'
release = '0.1'

extensions = [
172 changes: 172 additions & 0 deletions docs/source/examples.rst
@@ -0,0 +1,172 @@
Examples
========

This page provides a detailed overview of the examples included in the **Stressify.jl** project. The examples demonstrate how to use the tool to test and analyze the performance of APIs with various configurations.
They are designed to be easy to understand and modify, making them ideal for beginners and experienced users alike. You can find them in the `examples` directory of the `Stressify`_ repository.

.. _Stressify: https://github.com/jfilhoGN/Stressify.jl/tree/main/examples
.. _Documentation: https://stressifyjl.readthedocs.io/en/latest/

Available Examples
-------------------

1. **FirstTest.jl**
- **Description**: A basic example showcasing a simple performance test configuration.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.

**How to Run**:
Execute the following command:
```
julia examples/firstTest.jl
```
**Purpose**: Ideal for beginners to understand the basic structure of a performance test.

2. **SecondTest.jl**
- **Description**: A basic example showcasing how to use iterations and the number of VUs.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Uses iterations and VUs.

**How to Run**:
Execute the following command:
```
julia examples/secondTest.jl
```
**Purpose**: Ideal for beginners to understand the basic structure of a performance test that uses VUs and iterations.

3. **ThirdTest.jl**
- **Description**: Example of using a duration in seconds instead of iterations. The test runs until the configured duration elapses.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Uses a duration and VUs.

**How to Run**:
Execute the following command:
```
julia examples/thirdTest.jl
```
**Purpose**: Ideal for beginners to understand the basic structure of a performance test that uses VUs and a fixed test duration.

4. **FourthTest.jl**
- **Description**: Example of using multiple endpoints in a single test, ideal for building test scenarios. Stressify accepts the GET, POST, PUT, DELETE, and PATCH methods.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Uses a duration and VUs.
- Uses multiple endpoints.
- Uses multiple HTTP methods.

**How to Run**:
Execute the following command:
```
julia examples/fourthTest.jl
```
**Purpose**: Ideal for beginners to understand the basic structure of a performance test and how to use multiple HTTP methods within Stressify.

5. **FifthTest.jl**
- **Description**: Example of using Stressify's CSV data utility to read values from a CSV file. Ideal for executing many requests with different values.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Reads values from a CSV file.

**How to Run**:
Execute the following command:
```
julia examples/fifthTest.jl
```
**Purpose**: Ideal for understanding how to read values from a CSV file and execute many requests with different values.

6. **SixthTest.jl**
- **Description**: Example of using Stressify's JSON data utility to read values from a JSON file. Ideal for executing many requests with different values.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Reads values from a JSON file.

**How to Run**:
Execute the following command:
```
julia examples/sixthTest.jl
```

**Purpose**: Ideal for understanding how to read values from a JSON file and execute many requests with different values.

7. **SeventhTest.jl**
- **Description**: Example of using Stressify's check feature to validate the response returned by the endpoint under test.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Checks the response returned by the endpoint.

**How to Run**:
Execute the following command:
```
julia examples/seventhTest.jl
```

**Purpose**: Ideal for API testing where you need to validate the response returned by the endpoint under test.

8. **EightTest.jl**
- **Description**: Example of how to use Stressify to create tests that ramp up virtual users over a certain period of time.

**Key Features**:

- Loads API endpoints.
- Monitors basic performance metrics.
- Ramps up virtual users over a certain period of time.

**How to Run**:
Execute the following command:
```
julia examples/eightTest.jl
```

**Purpose**: Ideal for load tests that gradually increase the number of virtual users; see the configuration sketch below.
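
The sketch below shows what a ramp-up configuration might look like, based on the ``options`` parameters documented in the API reference; the concrete values, time units, and endpoint are illustrative assumptions, not the contents of ``eightTest.jl``:

```
using Stressify

# Hedged sketch: start at 1 VU, ramp to 10 VUs, and run for the
# configured duration (units assumed to be seconds). Values are illustrative.
Stressify.options(
    format = "vus-ramping",
    vus = 1,               # starting number of VUs
    max_vus = 10,          # ceiling reached at the end of the ramp
    ramp_duration = 30.0,  # ramp-up period
    duration = 60.0,       # total test duration in seconds
)

results = Stressify.run_test(
    Stressify.http_get("https://httpbin.org/get";
                       headers = Dict("Accept" => "application/json")),
)
```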


How to run the examples?
------------------------

To run any of the examples, follow these steps:

1. Clone the **Stressify.jl** repository from GitHub.
2. Navigate to the `examples` directory.
3. Run the desired example using the Julia command-line interface, e.g. ``julia examples/firstTest.jl``.
4. Monitor the output for test results and performance metrics.
5. Analyze the generated reports to gain insights into the API's performance.

Customizing Examples
--------------------

Each example is designed to be easily modified to suit your specific testing needs. Refer to the **API Documentation** for details on available methods, configurations, and metrics.

Feedback and Contributions
---------------------------

We welcome feedback and contributions! If you have ideas for new examples or improvements to the existing ones, feel free to:

- Open an issue in the repository.
- Submit a pull request with your changes.

For further details, visit the `Documentation`_.

19 changes: 18 additions & 1 deletion docs/source/guide.rst
@@ -3,4 +3,21 @@
User Guide
==========

Welcome to the **Stressify.jl** User Guide. This document will walk you through the basic steps of using Stressify for performance testing.

Stressify Performance Testing was created to simplify the implementation of various types of performance tests and to be an open source project that is easy to adopt by engineering, QAOps, and infrastructure teams alike.

Its simple format was designed so that tests can be created in minutes and easily added to continuous testing pipelines.

Finally, because it is built on the Julia language, it also makes it easy to create custom metrics. The project maps the following metrics:

- **Iterations**: Total number of requests executed.
- **Success Rate**: Percentage of successful requests.
- **Error Rate**: Percentage of failed requests.
- **Response Times**: Min, Max, Mean, Std, Median, P90, P95, P99.
- **RPS**: Requests per second.
- **TPS**: Transactions per second.

In addition, Julia's extensive mathematical and statistical libraries make it easy to derive new metrics from the execution results, which include a dictionary with all response times.
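
As a hedged illustration, a custom metric could be derived like this; the ``"all_times"`` key used to access the raw response times is an assumption about the results dictionary, not a documented name:

.. code-block:: julia

   using Statistics
   using Stressify

   Stressify.options(vus = 2, iterations = 20)
   results = Stressify.run_test(
       Stressify.http_get("https://httpbin.org/get";
                          headers = Dict("Accept" => "application/json")),
   )

   # "all_times" is an assumed key holding the raw response times.
   times = results["all_times"]
   println("interquartile range: ", quantile(times, 0.75) - quantile(times, 0.25))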


1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -9,6 +9,7 @@ Contents:

guide
api
examples

Indices and tables
==================
