
Create end to end tests #986

Open
dejanb opened this issue Nov 11, 2024 · 6 comments

@dejanb
Contributor

dejanb commented Nov 11, 2024

We need to replicate the guac end-to-end tests that cover the correlation bugs found in v1.

In my mind it would be ideal to start with a dataset test and make it easy to add new test cases.

@dejanb
Contributor Author

dejanb commented Nov 11, 2024

I don't see an easy way to initialize trustify with ds3 once and then run multiple tests against it. The solutions I found so far involve doing it manually using ctor, lazy_static, or once_cell.

Do we have a preferred way of doing something like this? I would like to see if we can do this all in cargo test, instead of spawning multiple processes/containers (if we can get away with it). @ctron @jcrossley3 @helio-frota
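
For illustration, a minimal sketch of the once_cell-style approach, using the standard library's std::sync::OnceLock so the expensive setup runs exactly once per test binary (note that cargo test compiles each integration-test file into its own binary). The TestContext type, the TEST_DB_URL variable, and the ingestion step are hypothetical, not actual trustify code:

```rust
use std::sync::OnceLock;

/// Hypothetical handle to the initialized test environment.
struct TestContext {
    db_url: String,
}

/// Runs the expensive one-time setup (connect to the database, ingest the
/// ds3 dataset) on first access; later calls reuse the same context.
fn shared_context() -> &'static TestContext {
    static CTX: OnceLock<TestContext> = OnceLock::new();
    CTX.get_or_init(|| {
        let db_url = std::env::var("TEST_DB_URL")
            .unwrap_or_else(|_| "postgres://localhost/trustify-test".into());
        // Hypothetical: the ds3 ingestion would happen here, once.
        TestContext { db_url }
    })
}

#[test]
fn correlation_case_one() {
    // Read-only assertions against the shared, already-ingested dataset.
    let ctx = shared_context();
    assert!(ctx.db_url.starts_with("postgres"));
}

#[test]
fn correlation_case_two() {
    let ctx = shared_context();
    assert!(!ctx.db_url.is_empty());
}
```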

@ctron
Contributor

ctron commented Nov 11, 2024

We do have something similar, maybe a bit more heavyweight: https://github.com/trustification/trustify-load-test-runs/

It's orchestrated through the compose.yaml

@helio-frota
Collaborator

> I don't see an easy way to initialize trustify with ds3 once and then run multiple tests against it.

The DB populate is done here (I had problems downloading the dump, but we can use a local dump file).

Yeah, I have to say that once we have things working with trustify-load-tests it turns out to be really handy: compose run, compose down, etc.

@JimFuller-RedHat self-assigned this Nov 11, 2024
@dejanb
Contributor Author

dejanb commented Nov 13, 2024

Thanks for the input. We discussed it a bit, and the idea is that we should try to reuse the existing dataset test and work in iterations toward a fully automated e2e suite.

  • The first step could be to separate the logic that ingests data from the actual testing
  • We can use the isolated ingestion logic to manually seed the database (and file system) with the desired starting state
  • Then we can write tests that expect this state to exist and access it read-only, using services and APIs to check the desired state
  • This would allow us to easily write new tests and verify things while developing, and later integrate it (in Rust or using compose) to run automatically; see the sketch after this list
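
For illustration, a minimal sketch of that ingest/test split, using hypothetical seed_with_dataset and test names rather than the real trustify services:

```rust
use std::path::Path;

/// Phase 1: isolated ingestion logic, run once (e.g. from a small setup
/// binary) to seed the database and file system with the desired state.
pub fn seed_with_dataset(dataset_dir: &Path) -> std::io::Result<usize> {
    let mut ingested = 0;
    for entry in std::fs::read_dir(dataset_dir)? {
        let path = entry?.path();
        if path.is_file() {
            // Here the real code would call the ingestion service per document.
            ingested += 1;
        }
    }
    Ok(ingested)
}

/// Phase 2: tests assume the seeded state already exists and only read it.
#[cfg(test)]
mod tests {
    #[test]
    fn seeded_dataset_is_queryable() {
        // Read-only assertions through services/APIs go here, e.g. looking up
        // a known SBOM from the dataset and checking its correlations.
    }
}
```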

@jcrossley3
Contributor

Pulling in @carlosthe19916 to this discussion, as he's already done some work toward facilitating e2e testing, which we use to affirm the integrity of each PR merge.

@carlosthe19916
Member

Hi everyone,
If it helps, the links that @jcrossley3 shared are the strategy the UI is currently using for executing e2e tests.

Reusing the current GH workflows is as easy as this PR: https://github.com/trustification/trustify/pull/857/files. I think Jim made it even simpler with:

```yaml
e2e-test:
  needs: publish
  uses: trustification/trustify-ci/.github/workflows/global-ci.yml@main
  with:
    server_image: ${{ needs.publish.outputs.image }}
    run_api_tests: true
    run_ui_tests: true
```

I am not sure exactly what kind of tests you have in mind, but perhaps we could write tests at https://github.com/trustification/trustify-api-tests and let the current workflows do the rest. We would only need to pay attention to writing tests, because the whole infrastructure and the task of providing an instance of Trustify should already be taken care of.

IMO, to do proper e2e tests we have to have a real instance of Trustify, simulating what the final user will do to instantiate Trustify (which in the UI's case is done by the operator).
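
For illustration, a minimal sketch of such an API-level e2e test in Rust, assuming a running instance reachable via a hypothetical TRUSTIFY_URL environment variable (the actual trustify-api-tests suite may use a different language and structure):

```rust
// A minimal sketch, not actual trustify-api-tests code. Requires reqwest with
// the "blocking" feature in [dev-dependencies].

#[test]
fn running_instance_is_reachable() {
    // Base URL of the already-provisioned instance; the env var name is an
    // assumption for this sketch.
    let base = std::env::var("TRUSTIFY_URL")
        .unwrap_or_else(|_| "http://localhost:8080".to_string());

    // A plain GET against the root; a real test would hit concrete API
    // endpoints and assert on the response body.
    let resp = reqwest::blocking::get(base.as_str())
        .expect("instance should be reachable");
    assert!(resp.status().is_success());
}
```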
