Testing proposal #60
Thanks for breaking this down Gregor. I want to emphasize why I believe integration tests will be the most useful type of testing for us. At the end of the day, the purpose of testing is to ensure that an application works as expected. The closer our tests mimic real-world usage, the more confident we can be that the app will function properly for our users.

Unit Testing: Strengths and Limitations
Unit testing focuses on testing components in isolation. While this can be useful for verifying small, self-contained pieces of functionality, the reality is that components never exist in isolation, so this is the approach furthest removed from how a component actually runs. Furthermore, unit tests often require extensive mocking, which can make them difficult to write and maintain. As Gregor pointed out, when we attempted to test the Tab Navigator in isolation, a significant amount of the work went into mocking the appropriate environment, and we were only able to test that clicking the tabs propagated the correct path name to the navigator. We were unable to test whether the correct component was rendered based on the path name, whether the user was redirected, or whether that route had any errors. One thing I do think unit tests are useful for is determining when pieces of code can be written in isolation. For example, most functions do not need to be tied to React state or components, so writing them in isolation where they can be tested makes the codebase simpler and easier to maintain.

QA and E2E Testing: Most Realistic but Costly
QA and end-to-end (e2e) testing provide the most realistic tests by running the app in a production environment. However, there are drawbacks:
Integration Testing: The Best of Both Worlds
Integration tests provide a balance between these approaches. They allow us to:
And to reiterate, if we want to test the app as close to real use as possible, mounting the entire app in a simulated environment is the ideal approach for testing.
To address your comments directly:
Hypothetically you should be able to run react-testing-library in vitest browser mode, since React Testing Library is agnostic about which DOM environment it uses. Browser mode obviously simulates the app closest to how it is actually run. I played around with it this weekend; it has a nice interface so you can watch your integration tests being run. Unfortunately I was not able to get it to work with my tests: it was throwing an error saying that I needed an ApiProvider, even though I was mocking everything in @comapeo-core/react. I did not have these problems with the jsdom implementation. But even if I could get it working in browser mode, I still think using jsdom is the better alternative.
On that note, I think mocking the backend would be the most valuable tool, but I imagine it will take a lot of time and effort. What we can do in the meantime is mock @comapeo-core/react functions directly in our test environments and use mapeo-mock-data. I have been doing this, and it allows for realistic injection of data without too much work.
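A minimal sketch of what that mocking can look like (the hook names and fixture shape below are placeholders rather than the real @comapeo-core/react API):

```ts
// At the top of a test file (sketch): mock the data hooks before the
// component under test is imported.
import { vi } from 'vitest'

vi.mock('@comapeo-core/react', () => {
  // vi.mock factories are hoisted, so the fixture data lives inside the
  // factory. These documents could also be generated with mapeo-mock-data
  // instead of being written by hand.
  const observations = [
    { docId: 'abc123', lat: 0.18, lon: -78.47, tags: { type: 'water-source' } },
  ]
  return {
    // Placeholder hook names: replace with the hooks the component under test uses.
    useObservations: () => ({ data: observations, isPending: false, error: null }),
    useProjectSettings: () => ({
      data: { name: 'Test project' },
      isPending: false,
      error: null,
    }),
  }
})
```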
Here is a testing example that mounts the entire router!
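A sketch of the shape such an example could take, assuming a file-based TanStack Router setup with a generated route tree and react-testing-library running in jsdom (route paths and accessible names are illustrative):

```tsx
// integration/settings.test.tsx (sketch)
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import {
  createMemoryHistory,
  createRouter,
  RouterProvider,
} from '@tanstack/react-router'
import { expect, test } from 'vitest'
// Assumes a file-based router with a generated route tree.
import { routeTree } from '../src/routeTree.gen'

test('clicking the settings button shows the settings screen', async () => {
  const router = createRouter({
    routeTree,
    // Memory history lets the test start on any route without a real browser URL.
    history: createMemoryHistory({ initialEntries: ['/'] }),
  })

  render(<RouterProvider router={router} />)

  // Accessible names are illustrative; use whatever the real UI renders.
  await userEvent.click(await screen.findByRole('button', { name: /settings/i }))

  // findByRole throws if the heading never appears, so it doubles as an assertion.
  await screen.findByRole('heading', { name: /settings/i })
  expect(router.state.location.pathname).toBe('/settings')
})
```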
My apologies, on a re-read of this I realise my purpose is not clear. My intention is to discuss the architecture for each type of test, and to a lesser extent to decide how we choose what should be tested in each. When I say "which approach", I mean "which approach for unit/integration/e2e". It would be good to get really clear on our approach and the test architecture before we jump into coding things.
I've been thinking more about our testing plan for desktop, based on conversations with @ErikSin. First, to get clear on terminology, this is how I will use terms in this proposal:
End-to-end Testing
The Electron docs describe three options for end-to-end testing: WebdriverIO, Selenium, and Playwright. Playwright's support for Electron is still experimental, and it only supports running the app in development mode, not as a packaged app. I think this is too far from how a user experiences the app: we have had many cases in the past where the app works in development mode but not when packaged.
I would lean towards WebdriverIO over Selenium, purely because WebdriverIO also seems to be an option for running Appium tests on mobile, so there is an advantage in learning one API for both mobile and desktop testing. As I understand it from a skim of the docs, we can launch the packaged app with WebdriverIO, interact with it, and run assertions.
This will be a fairly slow option, because the app needs to be packaged and then launched for the tests, and we will probably want to relaunch the app with cleared storage for each test if we want to avoid side effects between tests.
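For reference, a sketch of what the WebdriverIO config could look like if we used the community wdio-electron-service to drive the packaged app (the option names and binary path are illustrative and would need checking against the current docs):

```ts
// wdio.conf.ts (sketch): drives the packaged app via wdio-electron-service.
export const config: WebdriverIO.Config = {
  runner: 'local',
  specs: ['./e2e/**/*.spec.ts'],
  framework: 'mocha',
  services: ['electron'],
  capabilities: [
    {
      browserName: 'electron',
      'wdio:electronServiceOptions': {
        // Point at the packaged binary, not the dev build, so tests exercise
        // the app the way users actually run it.
        appBinaryPath: './dist/linux-unpacked/comapeo-desktop',
      },
    },
  ],
}
```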
Unit Testing
I think unit testing is useful when we have complicated logic that is either hard to test in e2e tests, or that we want to validate before we get to e2e tests. In many cases it's preferable to encapsulate the logic in a hook or function that can be tested without a React rendering context, to keep the tests simple and focussed on what we want to test. Unit testing of components can be useful if there are complex interactions, such as switches and toggles.
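As a sketch of what I mean by extracting logic out of React so it can be unit tested: a hypothetical helper plus its test (the function and its behaviour are made up for illustration).

```ts
// coords.ts: a hypothetical helper extracted out of a component so it can be
// unit tested without any React rendering context.
export function formatCoords(lat: number, lon: number): string {
  const latDir = lat >= 0 ? 'N' : 'S'
  const lonDir = lon >= 0 ? 'E' : 'W'
  return `${Math.abs(lat).toFixed(5)}° ${latDir}, ${Math.abs(lon).toFixed(5)}° ${lonDir}`
}

// coords.test.ts
import { describe, expect, test } from 'vitest'
import { formatCoords } from './coords'

describe('formatCoords', () => {
  test('formats northern/western coordinates', () => {
    expect(formatCoords(0.18056, -78.46784)).toBe('0.18056° N, 78.46784° W')
  })

  test('formats southern/eastern coordinates', () => {
    expect(formatCoords(-1.5, 120.25)).toBe('1.50000° S, 120.25000° E')
  })
})
```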
@ErikSin and I looked into how we might unit test TanStack Router layout components, e.g. test that a tab navigator is navigating to the correct page. It's possible, but it requires setting up a mocked code-based router, and what we are testing is limited to "when a user clicks this tab, the router navigates to this URL".
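A rough sketch of that mocked code-based router setup (the TabNavigator import, routes, and tab names are illustrative):

```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import {
  createMemoryHistory,
  createRootRoute,
  createRoute,
  createRouter,
  RouterProvider,
} from '@tanstack/react-router'
import { expect, test } from 'vitest'
// The layout component under test (name is illustrative).
import { TabNavigator } from '../src/components/TabNavigator'

test('clicking the Observations tab navigates to /observations', async () => {
  // A minimal code-based route tree: the layout under test plus stub child routes.
  const rootRoute = createRootRoute({ component: TabNavigator })
  const mapRoute = createRoute({
    getParentRoute: () => rootRoute,
    path: '/',
    component: () => <div>Map stub</div>,
  })
  const observationsRoute = createRoute({
    getParentRoute: () => rootRoute,
    path: '/observations',
    component: () => <div>Observations stub</div>,
  })
  const router = createRouter({
    routeTree: rootRoute.addChildren([mapRoute, observationsRoute]),
    history: createMemoryHistory({ initialEntries: ['/'] }),
  })

  render(<RouterProvider router={router} />)

  await userEvent.click(await screen.findByRole('tab', { name: /observations/i }))

  // All this really verifies is that the click produced the expected URL.
  expect(router.state.location.pathname).toBe('/observations')
})
```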
Unit tests seem best written using react-testing-library and run with JSDom, although vitest browser mode is a viable option too.
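For completeness, a minimal vitest config for the JSDom option (the setup-file path is a placeholder):

```ts
// vitest.config.ts (sketch): run component tests in a JSDom environment.
import react from '@vitejs/plugin-react'
import { defineConfig } from 'vitest/config'

export default defineConfig({
  plugins: [react()],
  test: {
    environment: 'jsdom',
    // Placeholder path: a shared setup file for things like module mocks
    // or jest-dom matchers, if we end up wanting one.
    setupFiles: ['./test/setup.ts'],
  },
})
```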
Integration Testing
For integration tests we would mount the entire app (e.g. the Router) and interact with it in a way similar to how a user would. We can test things like "when a user clicks the settings button, a screen with settings appears, and the map remains visible". There is potentially a lot of overlap between e2e tests and integration tests, but integration tests can be faster to run and because we can mock the back-end, we can test hard-to-replicate states such as loading states and error states.
I was considering two options for integration tests: react-testing-library and vitest browser mode (via vitest-browser-react which is similar to testing-library). The difference would be the context where the tests are run (JSDom vs Chrome).
Running tests in JSDom makes mocking the backend much simpler. Because the test code is running in Node, we can create ComapeoCore instances for each test using memory storage, and we can mock any methods as needed for tests (e.g. mock a particular method to return an error).
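A sketch of what that could look like, assuming a small helper that builds an in-memory core instance for each test (createTestCore and the observation.getMany method name are hypothetical stand-ins for the real API):

```ts
import { test, vi } from 'vitest'
// createTestCore is a hypothetical helper that builds an in-memory comapeo-core
// instance for a single test; `observation.getMany` is also a stand-in name.
import { createTestCore } from './helpers/createTestCore'

test('shows an error state when loading observations fails', async () => {
  const core = await createTestCore()

  // Force one backend method to fail so the UI's error state can be exercised,
  // a state that is hard to reach reliably in an e2e test.
  vi.spyOn(core.observation, 'getMany').mockRejectedValue(
    new Error('simulated backend failure'),
  )

  // ...mount the app with this core instance injected via its provider and
  // assert that the error UI renders (omitted here).
})
```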
Running the tests in vitest browser mode has the advantage of running them in a context that is much more similar to how the app will run for users (vs. JSDom). However, setting up the mocks is more complicated, because the test code runs in the browser, so we can't just create an instance of ComapeoCore and pass it into the context. I can see two ways forward:
a. We mock the entire ComapeoCore interface and manually control return values for tests.
b. We set up a communication bridge between the browser and the Node process controlling the tests, create temporary comapeo-core instances in Node, and use RPC reflector to create a proxy to the instance inside the test.
(a) seems like a lot of setup for the initial mock (and for maintaining it as we change the API), and for manually setting the return values needed to step through different states to reach what we want to test. (b) is similarly a lot of complicated setup: within each test we can't access the Node process (because the tests are running in the browser), so we would need to mock the RPC reflector proxy, which is not going to play nicely with the mocking library because it's a proxy, not an actual object.
Because there is so much setup and maintenance involved in mocking the backend if we run tests in vitest browser mode, I think it's best to just run integration tests in JSDom, and in many cases simply running a temporary instance of comapeo-core will be the easiest option.
Next Steps