Generic performance testing system #2324
-
Generating requests

CCF nodes receive commands as HTTP (or WebSockets) requests over a TLS connection. These commands can be created and fully serialised independently of their submission (eg, using Python requests' Prepared Requests, or the request serialisation in CCF's existing test clients). We have several different applications that we need to run performance tests on, and they each have different HTTP APIs - benchmarking the logging sample app means calling its logging endpoints, while other apps will need their own request streams. The output of this step can be very simple: a table where each entry contains a message ID and a serialised HTTP request. It would look something like this:

| messageID | request |
| --- | --- |
| 0 | `POST /app/log/private HTTP/1.1 ... {"id": 42, "msg": "Unique message: 0"}` |
| 1 | `POST /app/log/private HTTP/1.1 ... {"id": 42, "msg": "Unique message: 1"}` |
| 2 | `POST /app/log/private HTTP/1.1 ... {"id": 42, "msg": "Unique message: 2"}` |

The message ID should be a unique value generated by this tool, used to correlate the requests and responses which are later submitted (ie, the response is keyed by this message ID, rather than needing to store request and response together). The HTTP requests will be application-specific - this example shows what the expected stream for a perf test of our logging sample app might look like (all calling `POST /app/log/private`). This can easily be ingested by another tool which creates a TLS connection and submits these to CCF. Where there's a lot of repeated data (eg, common headers), this file should compress very well, so it should be fine to check pre-generated request loads into version control for consistent, reproducible testing. Parquet looks like a good file format for this.
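A rough sketch of how this generation step could look in Python, assuming pyarrow is used for the Parquet output. The host, payload shape, and column names here are illustrative, not a fixed schema:

```python
# Sketch: serialise a batch of logging requests into a Parquet request
# stream. The node address and payload are placeholders.
import pyarrow as pa
import pyarrow.parquet as pq
import requests

def serialise(prepared: requests.PreparedRequest) -> str:
    """Flatten a PreparedRequest into raw HTTP/1.1 text."""
    headers = "".join(f"{k}: {v}\r\n" for k, v in prepared.headers.items())
    body = prepared.body
    if isinstance(body, bytes):
        body = body.decode("utf-8")
    return f"{prepared.method} {prepared.path_url} HTTP/1.1\r\n{headers}\r\n{body or ''}"

rows = [
    (
        i,
        serialise(
            requests.Request(
                "POST",
                "https://node.example/app/log/private",  # placeholder host
                headers={"host": "node.example"},  # prepare() doesn't set Host itself
                json={"id": 42, "msg": f"Unique message: {i}"},
            ).prepare()
        ),
    )
    for i in range(1000)
]

pq.write_table(
    pa.table({
        "messageID": [m for m, _ in rows],
        "request": [r for _, r in rows],
    }),
    "requests.parquet",  # repeated headers mean this compresses well
)
```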
-
Submitting requests

Given a stream of requests produced by the tool above, a simple client should be able to open a TLS connection to a CCF node, write each serialised request to it, and read back the responses, correlating them by message ID (a rough sketch follows the tables below).

In the simplest case, it should submit all of these, wait for their responses, and then terminate, but it should also have some configurable options. Additionally, we want to poll until these transactions have been committed (calling CCF's transaction status endpoint), so that we measure time-to-commit rather than just time-to-response.

The output of this step is 2 files, one containing the sent requests and another the received responses, all with timestamps, along these lines:

Sends

| messageID | sendTime |
| --- | --- |
| 0 | 1652874372.1053516 |
| 1 | 1652874372.1055465 |

Receives

| messageID | receiveTime | rawResponse |
| --- | --- | --- |
| 0 | 1652874372.3214530 | `HTTP/1.1 200 OK ...` |
| 1 | 1652874372.3216330 | `HTTP/1.1 200 OK ...` |
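A minimal sketch of such a client, assuming the requests.parquet file produced above and a node at a hypothetical local address. Pipelining, rate limiting, and polling for commit are all omitted:

```python
# Sketch: replay a Parquet request stream over a single TLS connection,
# recording send/receive timestamps to two Parquet files.
import socket
import ssl
import time

import pyarrow as pa
import pyarrow.parquet as pq

stream = pq.read_table("requests.parquet")
ids = stream["messageID"].to_pylist()
raws = stream["request"].to_pylist()

ctx = ssl.create_default_context()
ctx.check_hostname = False       # test-only: trust the node's
ctx.verify_mode = ssl.CERT_NONE  # self-signed service certificate

sends, receives = [], []
with socket.create_connection(("127.0.0.1", 8000)) as sock:
    with ctx.wrap_socket(sock) as tls:
        for msg_id, raw in zip(ids, raws):
            sends.append((msg_id, time.time()))
            tls.sendall(raw.encode())
            # Naive read: assumes each response fits in a single recv()
            resp = tls.recv(65536)
            receives.append((msg_id, time.time(), resp.decode(errors="replace")))

pq.write_table(
    pa.table({"messageID": [s[0] for s in sends],
              "sendTime": [s[1] for s in sends]}),
    "sends.parquet",
)
pq.write_table(
    pa.table({"messageID": [r[0] for r in receives],
              "receiveTime": [r[1] for r in receives],
              "rawResponse": [r[2] for r in receives]}),
    "receives.parquet",
)
```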
-
Reading and writing Parquet files in C++: https://arrow.apache.org/docs/cpp/parquet.html#filereader
-
Analysing results

Given those sent and received files, it should be easy to load them, parse whatever details need parsing, and produce performance metrics - for instance, the total time between the first send and the last receive, and a throughput rate derived from that. This kind of analysis (especially if the files are in a common format like Parquet) is a natural fit for pandas. As a minimum, we need a tool which takes those files and prints the transaction rate. Ideally, it prints several more metrics, and is configurable/extensible so more can be added in future.
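For example, a minimal analysis in pandas, assuming the column names sketched in the tables above (messageID, sendTime, receiveTime), might look like:

```python
# Sketch: load the send/receive files and print basic metrics.
import pandas as pd

sends = pd.read_parquet("sends.parquet")
receives = pd.read_parquet("receives.parquet")

# Join request and response rows on the shared message ID
merged = pd.merge(sends, receives, on="messageID")

total_time = merged["receiveTime"].max() - merged["sendTime"].min()
throughput = len(merged) / total_time
latencies = merged["receiveTime"] - merged["sendTime"]

print(f"Total time: {total_time:.2f}s")
print(f"Throughput: {throughput:.1f} tx/s")
print(f"Mean latency: {latencies.mean() * 1000:.2f}ms")
print(f"p99 latency: {latencies.quantile(0.99) * 1000:.2f}ms")
```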
-
Description of Performance Test Tools

A survey of existing load-testing tools, weighing the cons of each, with pointers to other performance testing tools.
-
Creating this discussion to flesh out our planning around #848. Our existing performance testing framework is brittle and was designed to support a custom protocol which is no longer needed. We plan to replace it with a simpler system which clearly separates the steps of generating a request stream, submitting it to CCF, and analysing the results.
For parity with the current performance tests, this needs to cover everything the existing framework measures. We're hoping the new system will do this more cheaply and flexibly, while also supporting additional functionality.
I see the new system as 3 separate tools with 3 supporting file formats, which can be chained together for a simple test or run separately/offline when needed. I'll discuss these in more detail in separate comments below, but at the top level these are:

- A request generator, which produces a file containing a stream of serialised HTTP requests.
- A request submitter, which replays that stream against a CCF node over TLS and produces timestamped send and receive files.
- A results analyser, which reads those files and produces performance metrics.
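As a rough illustration of how the three stages could chain together for a simple test, a driver might look like the sketch below. The module and function names (generator, submitter, analyser) are invented for illustration; only the file handoffs reflect the design:

```python
# Hypothetical end-to-end driver chaining the three tools via their
# intermediate files.
import generator
import submitter
import analyser

# Step 1: build a reproducible request stream (could equally be a
# pre-generated file checked into version control)
generator.write_request_stream("requests.parquet", count=100_000)

# Step 2: submit over TLS to a CCF node, recording timestamps
submitter.run(
    "requests.parquet",
    node="https://127.0.0.1:8000",
    sends_out="sends.parquet",
    receives_out="receives.parquet",
)

# Step 3: compute and print metrics
analyser.report("sends.parquet", "receives.parquet")
```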