
Test Definition #4

Open

ek5932 opened this issue May 29, 2019 · 2 comments

ek5932 (Owner) commented May 29, 2019

@MainMa, I can't find a way to have a discussion on here so I'm using an issue to do so.

To create something functional, I think it makes sense to use an Excel-based format for defining tests for now, similar to the previous BDD tests I showed you. Ideally there would be a UI with a designer instead, but that's a lot of work. There's still a lot of benefit in a tool that works with Excel files, so it's probably more realistic as an MVP.

I started coming up with a design for the syntax - it would be good to get fresh input.
Here's the design so far:

[Screenshot: Capture]

First thing to call out is the notion of groups. In my previous tool it was difficult to determine at which particular step a test failed without trawling through the logs. Groups let you optionally group multiple commands, so at the end you see a more granular view of what passed rather than only whether the test as a whole passed. This may not be necessary, though, as we could simply log the results of individual steps.

Next are the two types of actions, when and then: essentially the doing and checking parts. Each part lets you set multiple properties of the request or response (maybe there should be different syntax for the command !http-Put vs a parameter of that command such as !Content). It should hopefully be self-explanatory. For the content itself we will need a way to distinguish literal values from mapped values, where the value is looked up. It will also need to support hierarchical data, which I see as one of the limitations of Excel.
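As an illustration only (the screenshot isn't reproduced here), a rough sketch of how a grouped when/then test might be laid out as worksheet rows; the URLs, headers and the {map:...} notation are placeholders, not settled syntax:

```python
# Hypothetical worksheet rows for one test, expressed as lists of cells.
# Column 1: group / when / then marker, column 2: command or property,
# column 3: value. The !http-Put and !Content markers come from the
# description above; everything else is made up.
test_rows = [
    ["#Group", "Create customer",   ""],
    ["When",   "!http-Put",         "/customers/123"],
    ["",       "!Header",           "Content-Type: application/json"],
    ["",       "!Content",          '{"name": "Alice"}'],
    ["Then",   "StatusCode",        "201"],
    ["",       "Content.name",      "Alice"],          # literal value
    ["",       "Content.accountId", "{map:account}"],  # mapped (looked-up) value
]

for marker, command, value in test_rows:
    print(f"{marker:<8} {command:<18} {value}")
```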

Here's my current list of things to figure out:

  • Where to define mapped data values
  • How to mark a field as a mapped value
  • How to define hierarchical content (request & response)
  • How to define authorised calls (different tests should support different credentials)
  • Where to store credentials for authorised calls

Perhaps we could create a schema for each set of content, detailing the type, the display name vs. the API name, and whether it's a mapped or literal value. That may be verbose, though.
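A minimal sketch of what such a per-content schema could look like, assuming a simple field-per-entry layout; the field names and the mapped flag are illustrative only:

```python
# Hypothetical schema describing one set of content. Each entry records the
# type, the display name used in the Excel file vs. the API (wire) name,
# and whether the value is literal or looked up from a mapping.
order_schema = {
    "Order number": {"api_name": "orderId",     "type": "string",  "mapped": False},
    "Customer":     {"api_name": "customerId",  "type": "string",  "mapped": True},
    "Total":        {"api_name": "totalAmount", "type": "decimal", "mapped": False},
}

def to_api_field(display_name: str) -> str:
    """Translate a display name used in the spreadsheet to the API name."""
    return order_schema[display_name]["api_name"]
```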

arseni-mourzenko (Collaborator) commented

(Using GitHub issues for discussions like this one is, IMHO, perfectly fine: a discussion can lead to a change in the code, just as a bug report can lead to a discussion about the design of the actual product.)

Regarding the format, I find it indeed much clearer than the original Excel files. A few aspects, however, bother me:

  1. I don't see the usefulness of the “#” and exclamation point prefixes, and since the product targets non-developers, potential users won't understand them either.

    If the goal is to determine, programmatically, what is what, I think it can be done relatively easily based on the position of the fields and, partially, on their content. For instance, a cell containing the text “StatusCode”, preceded by a column which contains “Then” somewhere above and followed by a cell with a number, is very likely to be a property type. (A rough sketch of this idea follows the list below.)

  2. Shouldn't there be a way to reuse blocks of tests?

    Say, for instance, I have an API which handles customers. Each customer, once authenticated, can access his personal information, such as the shipment or billing address, or change things, such as opting in or out of marketing e-mails. Obviously, it would be a mistake to let a customer access information about another customer; therefore, there should be tests to ensure that whenever customer A wants to act on data of customer B, the API responds with HTTP 403. I could write individual tests, one for the GET, POST and DELETE routes of /address/shipment, another for GET and POST of /address/billing, and another for GET and POST of /marketing/opt-in, but later on, if I need to change anything (say the API architects decide to add a specific error message within the HTTP 403 response), I have to make the change in all three tests, and may miss one of them.

  3. How should common functionality be specified?

    Some APIs have specific requirements as to the way the caller should use them. Some have very specific authentication mechanisms. Others (such as Amazon services) require including a hash of the request within the request itself in order to mitigate the risk of tampering. Others need specific headers to be included, such as correlation IDs.

    I'm hesitant to consider these edge cases, since APIs with “special cases” seem to be far more numerous than APIs without them (especially in the corporate world).

    I suppose that such cases can be handled directly in source code, but this means that not everything can be handled through the Excel file: test cases belong to Excel, and the environment belongs to the source code. In that case, how should this third-party source code be hosted and executed on test agents?
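To make point 1 concrete, a minimal sketch of classifying cells from their position relative to a preceding When/Then marker, with no prefixes; the row layout and values are made up for illustration:

```python
# Minimal sketch: classify worksheet cells from their position relative to a
# preceding "When"/"Then" marker instead of relying on "#"/"!" prefixes.
rows = [
    ["When", "http-Put",     "/customers/123"],
    ["",     "Content",      '{"name": "Alice"}'],
    ["Then", "StatusCode",   "201"],
    ["",     "Content.name", "Alice"],
]

def classify(rows):
    mode = None
    for marker, name, value in rows:
        if marker in ("When", "Then"):
            mode = marker
        if mode == "When":
            yield name, "request property"
        elif mode == "Then":
            yield name, "response assertion"
        else:
            yield name, "unknown"

for name, kind in classify(rows):
    print(f"{name:<14} -> {kind}")
```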

There is, indeed, an additional problem with the credentials when the tests should access the API using different profiles, but you already mentioned it.

Last but not least, I'm wondering, should the Excel files be the only way to write the tests? In fact, there may be several objections to that:

  1. The product may target developers as well, and developers are usually not particularly keen on using Excel for anything, since it presents several problems (it cannot easily be versioned in their preferred version control system, it's not code, and it's Excel, as opposed to their favorite IDE).

  2. It won't be possible, later on, to add support for a diff between different test definitions.

  3. It would be cumbersome, for us, to test our own product if we have to use Excel files every time.

  4. In the long term, it could pollute the design of the product with Excel internals, making it more difficult to add support for a UI with a designer later on.

Shouldn't we, instead, have a two-layer approach?

  • One component would have the task of parsing the Excel file and building a text-based description following a specific format. This format would be documented, so users who do not intend to use Excel could simply write their tests directly in this text format.

  • The second component would parse the plain-text version of the test, convert it to an AST, and pass it down to the component which actually runs the tests.

This way, the Excel files become just one of the alternatives for defining tests. Others would consist of writing tests directly in plain text, or, somewhere in the future, using a UI designer, or maybe using a third-party tool which would create regression tests automatically from Swagger.
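A minimal sketch of the two-layer split, with hypothetical names and an equally hypothetical text format; the only point is that the Excel reader and the text parser are separate, and the runner never sees Excel:

```python
# Hypothetical two-layer pipeline: Excel -> plain-text test definition -> AST
# -> runner. Names and the text format itself are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    kind: str   # "when" or "then"
    name: str   # e.g. "http-Put", "StatusCode"
    value: str

@dataclass
class TestDefinition:
    name: str
    steps: List[Step]

def excel_to_text(workbook_path: str) -> str:
    """Layer 1: read the workbook and emit the documented text format.
    (Excel parsing omitted; this is only a placeholder.)"""
    return "test Create customer\nwhen http-Put /customers/123\nthen StatusCode 201\n"

def text_to_ast(text: str) -> TestDefinition:
    """Layer 2: parse the plain-text format into an AST for the runner."""
    lines = [line.split(maxsplit=2) for line in text.strip().splitlines()]
    name = " ".join(lines[0][1:])
    steps = [Step(kind, step_name, value) for kind, step_name, value in lines[1:]]
    return TestDefinition(name=name, steps=steps)

definition = text_to_ast(excel_to_text("tests.xlsx"))
print(definition)
```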

ek5932 (Owner, Author) commented Jun 3, 2019

  1. I don't see the usefulness of the “#” and exclamation point prefixes, and since the product targets non-developers, potential users won't understand them either.

As you mentioned, it's there to make parsing easier for us. I agree it makes the format more technical, so we can try to exclude them.

  2. Shouldn't there be a way to reuse blocks of tests?

Yes, but it's harder to build using Excel/test files than using a website where you can easily link pre-existing steps.

In our previous tool we had the ability to include other files in a test, which we used for importing data mappings and schemas. Before that we had the definitions in each test, and it became very verbose and it wasn't obvious what was going on. IMO the test definition itself should focus on the actual testing rather than the setup.

Perhaps, as well as defining explicit tests as in the example above, we could allow step templates, for which the user only needs to specify input/output values. Something like:

[Screenshot: Capture]

With steps like Create-order mapping to a definition specifying the URL action(s), header values, etc. The only problem is that it makes things more complicated. In my experience, developers would generally define these mappings and then give examples to the business to create tests from.
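For example, a Create-order template might be nothing more than a named mapping from the template name to a request definition with placeholders for the user-supplied values (all names and URLs below are illustrative):

```python
# Hypothetical step templates: developers define the mapping once, and test
# authors only fill in input/output values.
step_templates = {
    "Create-order": {
        "method": "POST",
        "url": "/orders",
        "headers": {"Content-Type": "application/json"},
        "body": {"customerId": "{customer}", "productId": "{product}"},
    },
}

def expand(template_name: str, inputs: dict) -> dict:
    """Fill a template's placeholders with the values supplied in the test."""
    template = step_templates[template_name]
    body = {key: value.format(**inputs) for key, value in template["body"].items()}
    return {**template, "body": body}

print(expand("Create-order", {"customer": "123", "product": "ABC"}))
```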

  3. How should common functionality be specified?

For things like authentication, I was thinking about including specific drivers which the user could use: OAuth2, AWS, Azure, etc., each with different parameters specific to the type. If we could define them using templates, as discussed in the previous point, even better! Most cases are a series of HTTP operations, so I'm sure we could recreate them.
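As a sketch of what such a driver could look like, here is a hypothetical OAuth2 client-credentials driver; the get_headers() contract and parameter names are assumptions, though the token request itself follows the standard OAuth2 flow:

```python
# Hypothetical authentication driver. Only the OAuth2 client-credentials flow
# is sketched; AWS/Azure drivers would plug into the same get_headers() contract.
import requests

class OAuth2Driver:
    def __init__(self, token_url: str, client_id: str, client_secret: str):
        self.token_url = token_url
        self.client_id = client_id
        self.client_secret = client_secret

    def get_headers(self) -> dict:
        """Fetch a bearer token and return the headers to add to each API call."""
        response = requests.post(self.token_url, data={
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        })
        response.raise_for_status()
        token = response.json()["access_token"]
        return {"Authorization": f"Bearer {token}"}
```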

Need to shoot now - will answer your other comments later tonight.
