review of practices #30
`docs/content/practices/state_clearing.md`
# State clearing
Whenever we execute UI tests, it is likely that we read/write some data locally.
These changes can affect the execution of the subsequent tests, for example:

* We run `Test1`; it performs some HTTP requests and saves some data to files and databases.
* When `Test1` is finished, `Test2` is launched.
* However, `Test1` left some data on the device, which can cause `Test2` to fail.

That's where *state clearing* comes to the rescue: clear the data before each test.

## Strategies for state clearing

There are a few strategies to deal with this:

1. Clearing within a process
2. Clearing package data

### 1. Clearing within a process

The state clearing happens *without killing the application process*. We have 2 options here:

##### Use a component from the real code base

For instance, a `LogoutCleaner` component from the production code that wipes all the cached state of the
application: databases, files, preferences and runtime cache. It should be executed before each test.

!!! danger

    This solution is a bottleneck, and it's better to avoid it altogether. If `LogoutCleaner` is broken, all of the tests will fail.

##### Clear internal storage

All cache data (e.g. local databases, shared preferences and some files) in any Android application is written to the internal storage: `/data/data/packagename/`
<br/>This storage is our application sandbox and can be accessed without any permission.

To avoid using components from the real code base, use test rules which do the job for us instead.

```kotlin
class ExampleTest {

    // Test rules that clear state before each test,
    // without touching any production code:
    @get:Rule
    val clearPreferences = ClearPreferencesRule() // clears SharedPreferences
    @get:Rule
    val clearDatabases = ClearDatabaseRule()      // deletes all app databases
    @get:Rule
    val clearFiles = ClearFilesRule()             // deletes files in internal storage
}
```

Barista already provides such rules; you can find
them [here](https://github.com/AdevintaSpain/Barista/tree/master/library/src/main).
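At its core, such a rule is just recursive file deletion inside the app sandbox. Below is a minimal, JVM-runnable sketch of the idea, using a temporary directory to stand in for `/data/data/packagename/`; the names are illustrative, not any library's real API:

```kotlin
import java.io.File
import java.nio.file.Files

// Deletes everything inside the app sandbox, but keeps the directory itself,
// the way a state-clearing test rule would before each test.
fun clearSandbox(sandbox: File) {
    sandbox.listFiles()?.forEach { child ->
        child.deleteRecursively()
    }
}

fun main() {
    // Stand-in for /data/data/packagename/ on a real device.
    val sandbox = Files.createTempDirectory("app-sandbox").toFile()
    File(sandbox, "databases/app.db").apply { parentFile.mkdirs(); writeText("rows") }
    File(sandbox, "shared_prefs/settings.xml").apply { parentFile.mkdirs(); writeText("<map/>") }

    clearSandbox(sandbox)

    // The sandbox directory survives, but all cached state is gone.
    println(sandbox.exists() && sandbox.listFiles()!!.isEmpty()) // prints "true"
}
```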

!!! warning

    This solution won't work in 100% of cases:

    1. You may have runtime cache (e.g. data also held in an in-memory `HashMap`), which clearing the internal storage won't remove
    2. The test or the application process may crash and prevent the launch of the next tests
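The runtime-cache caveat is easy to reproduce. A minimal sketch in plain Kotlin (the singleton is hypothetical): wiping every file on disk does not touch data held in memory, so it can leak into the next test:

```kotlin
import java.io.File
import java.nio.file.Files

// Hypothetical runtime cache: a singleton holding data in memory.
object UserCache {
    val entries = HashMap<String, String>()
}

fun main() {
    val sandbox = Files.createTempDirectory("app-sandbox").toFile()
    val db = File(sandbox, "user.db").apply { writeText("token=abc") }
    UserCache.entries["token"] = "abc"

    // "Clear internal storage": every file in the sandbox is gone...
    sandbox.listFiles()?.forEach { it.deleteRecursively() }
    println(db.exists())       // prints "false"
    // ...but the in-memory cache survived the clearing.
    println(UserCache.entries) // prints "{token=abc}"
}
```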

##### Conclusion

These are pros/cons for both solutions which don't kill the process:

➕ Easy implementation. Simply add the corresponding TestRules<br/>
➕ Fast execution in the same process<br/>
<br/>
➖ Doesn't give you any guarantee that your app will be cleared properly<br/>
➖ The application or test process being killed will break the test execution<br/>
➖ Can be a bottleneck: if a shared clearing component breaks, every test fails<br/>

Use these solutions only as a temporary workaround, because they won't work in the long term for huge projects.

### 2. Clearing package data

Our aim is to simulate the same behavior as when the user presses the `clear data` button in the application settings.
<br/>The application process is killed in that case, and our application will be initialized with a cold start.

##### Orchestrator

The Android Orchestrator aims to isolate the state of each test by running each of them in a separate
instrumented process. That can be achieved by executing your tests like this:

```bash
adb shell am instrument -c TestClass#method1 -w com.package.name/junitRunnerClass
adb shell pm clear com.package.name
adb shell am instrument -c TestClass#method2 -w com.package.name/junitRunnerClass
adb shell pm clear com.package.name
```

Each test is executed in an isolated instrumented process, and the junit reports are merged into one big
report when all tests are finished.

That's the idea behind the `Orchestrator`.
<br/>
It's an `apk` which only consists of [several classes](https://github.com/android/android-test/tree/master/runner/android_test_orchestrator/java/androidx/test/orchestrator)
that run tests and clear data, as described above.

You need to install the `orchestrator` along with the `application.apk` and the `instrumented.apk` on the device.

But that's not all.
<br/>
The Orchestrator also needs to execute adb commands. For that it uses [special services](https://github.com/android/android-test/tree/master/services) under the hood.
It's just a shell client, packaged as `test-services.apk`, and it has to be installed on the device as well.
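If you run the tests through Gradle, the Android Gradle Plugin can wire all of this up for you. A typical configuration sketch (versions are illustrative):

```kotlin
// build.gradle.kts (app module) — versions are illustrative
android {
    defaultConfig {
        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
        // Ask the runner to clear package data between tests (API 28+).
        testInstrumentationRunnerArguments["clearPackageData"] = "true"
    }
    testOptions {
        // Run each test through the Orchestrator.
        execution = "ANDROIDX_TEST_ORCHESTRATOR"
    }
}

dependencies {
    androidTestImplementation("androidx.test:runner:1.5.2")
    // androidTestUtil makes Gradle install orchestrator.apk
    // and test-services.apk alongside your apks.
    androidTestUtil("androidx.test:orchestrator:1.4.2")
    androidTestUtil("androidx.test.services:test-services:1.4.2")
}
```

With this setup, Gradle-driven runs install the extra apks for you; for other CI setups you install them manually, as described above.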
![alt text](../images/orchestrator.png "orchestrator and test-services")

Although it does the job, this solution looks overcomplicated:

1. We need to install 2 extra apks on each emulator
2. We delegate this job to the device instead of the host machine.
<br/>Devices are less reliable than the host PC: they can lose the adb connection, be interrupted by system dialogs, or run into battery and network issues
##### Other solutions

It's also possible to clear package data by
using [3rd party test runners](https://android-ui-testing.github.io/Cookbook/practices/test_runners_review/), like
Marathon, Avito-Runner or Flank. Marathon and Avito-Runner clear package data without an orchestrator: they delegate
this logic to the host machine.
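A host-side runner's clearing step boils down to issuing `pm clear` over adb from the host machine. A minimal sketch with a hypothetical helper (not Marathon's or Avito-Runner's actual code); the command is only built here, since executing it requires adb and a connected device:

```kotlin
// Builds the adb command a host-side runner would issue between tests.
// `serial` targets a specific device; hypothetical helper, for illustration only.
fun buildClearCommand(serial: String, packageName: String): List<String> =
    listOf("adb", "-s", serial, "shell", "pm", "clear", packageName)

fun main() {
    val cmd = buildClearCommand("emulator-5554", "com.package.name")
    println(cmd.joinToString(" ")) // prints "adb -s emulator-5554 shell pm clear com.package.name"
    // A runner would execute it like this (requires adb on the host):
    // ProcessBuilder(cmd).inheritIO().start().waitFor()
}
```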

##### Conclusion

These are pros/cons for the `orchestrator` and 3rd party test runner solutions:

➕ Does 100% of the job for us<br/>
➕ Avoids tests failing in cascade due to the application process being killed<br/>
<br/>
➖ Slow execution<br/>
➖ Requires installing extra components, which is over-complicated<br/>

The slow execution has 2 sources:

1. The time spent killing and restarting the process each test runs in, multiplied by the number of tests.
2. Executing `adb pm clear` after each test takes some time and depends on the apk size. Below you may see some gaps between the tests which represent such a delay.

![alt text](../images/package_clear.png "ADB package clearing takes some time")


## Final conclusion

!!! success

    Only package clearing can guarantee that the data will be cleared properly between test executions.
    Marathon and Avito-Runner provide the easiest way to clear application data:

    1. One simply needs to set a flag in their configuration
    2. They don't use an orchestrator under the hood, avoiding its caveats



`docs/content/practices/test_runners_review.md`
# Test runners

A test runner is the component responsible for:

1. Preparing the test runs
2. Providing test results

`AndroidJunitRunner` — the official solution and a low-level instrument. It requires a lot of effort from engineers to run
tests on the CI and make them stable.

It's worth mentioning that UI testing tools are getting better year by year. However, some basic functionality still doesn't work
properly out of the box.

### 1. Problems with AndroidJunitRunner:

* Overcomplicated solution with state clearing
<br>
_It would be good to do the state clearing of the application by setting a single flag.
<br>
Such a flag exists; however, to make it scale on the CI and to be able to use test filters, you still have to
install `test-services.apk` and `orchestrator.apk` on each device manually_


* Impossibility to scale
<br>
_As soon as you have started your tests, it's impossible to let more devices join the test execution on the fly_


* Impossibility to prevent flakiness
<br>
_Flakiness is one of the main problems in instrumented testing. Test runners should provide mechanisms to fight
flakiness, like support for retrying failing tests, among other strategies_

* Impossibility to validate flakiness
<br>
_Flakiness can be validated by running each test multiple times. If the test passes every single time it runs, it's not flaky. It would be
great to launch each test 100 times with one command_
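The validation idea above can be sketched in a few lines of plain Kotlin (names are illustrative): run the same test body N times and count the passes; the fake test below fails deterministically on every 3rd run to simulate flakiness:

```kotlin
// Runs one test body `times` times and counts how often it passes.
fun passRate(times: Int, test: () -> Unit): Int {
    var passed = 0
    repeat(times) {
        try {
            test()
            passed++
        } catch (_: AssertionError) {
            // a failed run: don't count it
        }
    }
    return passed
}

fun main() {
    var run = 0
    // Fake flaky test: fails on every 3rd invocation.
    val flakyTest = {
        run++
        if (run % 3 == 0) throw AssertionError("flaky failure")
    }
    // N/N passes would mean "not flaky"; anything less is a flaky test.
    println("${passRate(99, flakyTest)}/99 runs passed") // prints "66/99 runs passed"
}
```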

* Poor test report
<br>
_The default test report doesn't show enough valuable information for each test. As an engineer, I want to see a video of the test and its logs. Moreover, I'd like to know whether
the test has been retried. If yes, I'd also like to see how many retries there were, and their corresponding videos and logs_

* Impossibility to retry
<br>
_It's only possible to do via a special test rule which does the retries for us. However, it's up to the test runner to retry each
test, because:_
1. *The instrumented process might have crashed.* In this case the test rule may not execute the code where the retry actually happens.
2. *The device is less reliable than the host machine*, so retries are safer when driven from the host.
<br>
_Also, it should be possible to define a maximum retry count: imagine your application reliably crashes on start, and you have plenty of tests executing that code. We shouldn't
retry each test in that case: we would overload the CI build agents with tests that are doomed to fail._
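What a runner-level retry with a budget amounts to can be sketched in plain Kotlin (illustrative, not any real runner's API): re-run a failing test body up to a maximum count, and give up once the budget is exhausted:

```kotlin
// Re-runs a test body up to `maxRetries` extra times after a failure,
// the way a test runner (rather than a test rule) would drive retries.
fun runWithRetries(maxRetries: Int, test: () -> Unit): Boolean {
    repeat(maxRetries + 1) { attempt ->
        try {
            test()
            return true // passed on this attempt
        } catch (e: AssertionError) {
            println("attempt ${attempt + 1} failed: ${e.message}")
        }
    }
    return false // retry budget exhausted: report a real failure
}

fun main() {
    var attempts = 0
    // Fake flaky test that only passes on its 3rd attempt.
    val result = runWithRetries(maxRetries = 3) {
        attempts++
        if (attempts < 3) throw AssertionError("flaky")
    }
    println("passed=$result after $attempts attempts") // prints "passed=true after 3 attempts"
}
```

An app that always crashes on start simply burns through `maxRetries` and fails, instead of being retried forever.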


* Impossibility to record a video
<br>
_It's possible to implement manually. However, it would be really great to have such functionality already built in_

Almost all of those problems can be solved, but it can take weeks or even months of your time. Besides running tests,
you also need to care about writing tests, which is challenging as well.
<br>
Having those problems solved for you lets you focus on other tasks.

### 2. Open source test runners

All of them use `AndroidJunitRunner` under the hood, as it's the only way to run instrumented tests.

#### [:green_square: 2.1 Marathon](https://github.com/MarathonLabs/marathon)

A powerful, feature-rich test runner that can
do the whole job for you.
➕ Dynamic test batching (by test count or test duration) <br>
➕ Smart retries with quotas <br>
➕ Screenshots & video out of the box <br>
➕ Improved test report with video & logs <br>
➕ Automatically rebalanced test execution when connecting/disconnecting devices on the fly <br>
➕ Pulls files from the device after the test run,
e.g. [allure-kotlin](https://github.com/allure-framework/allure-kotlin) <br>
➕ Basic [Allure](https://github.com/allure-framework) support out of the box <br>
➕ adb client `ddmlib` replacement: [Adam](https://github.com/Malinskiy/adam) <br>
➕ Cross-platform (iOS support) <br>
➕ Fragmented test execution (similar to AOSP's sharding): split large test suites into multiple CI builds <br>
➕ Parallel execution of parameterised tests <br>
➕ Interactions with adb/emulator from within a test (e.g. fake fingerprint or GPS) <br>
➕ Code coverage support <br>
➕ Testing multi-module projects in one test run <br>
➕ Flakiness fixing mode to verify test passing probability improvements <br>

➖ Doesn't auto-scale devices <br>
_(Marathon will utilise more devices at runtime if some other system connects more to adb, but Marathon itself will
not spawn more emulators for you)_<br>
➖ HTML report doesn't contain test retry information (but the Allure report does) <br>
➖ Complex test executions that address flakiness require installing a TSDB (InfluxDB or Graphite) <br>

[Documentation](https://marathonlabs.github.io/marathon/)

#### 2.2 Avito-Runner

A powerful test runner. Works directly with `Kubernetes`.

➕ Easy data clearing _(without an Orchestrator)_ <br>
➕ Auto-scaling on the fly _(there is a coroutine in the background which tries to connect more devices)_ <br>
➕ Retries <br>

➖ Complicated adoption <br>

This test runner has been used by Avito for 4+ years and runs thousands of tests every day. It's not as powerful
as Marathon; however, it doesn't have an analogue in terms of auto-scaling out of the box.<br>
If you want to run your UI tests on pull requests in a large team, this test runner is one of the best options.

Engineers from Avito are ready to help with adoption. You can reach out to [Dmitriy Voronin](https://github.com/dsvoronin)

[Documentation](https://avito-tech.github.io/avito-android/test_runner/TestRunner/)