From 6e0b5910ff59a2af78fe74ce8d247b192026100d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Tue, 7 Dec 2021 21:56:27 +0100 Subject: [PATCH 01/11] review of state clearing --- docs/content/practices/state_clearing.md | 69 ++++++++++++------------ 1 file changed, 36 insertions(+), 33 deletions(-) diff --git a/docs/content/practices/state_clearing.md b/docs/content/practices/state_clearing.md index 9edfeae..fb43ed2 100644 --- a/docs/content/practices/state_clearing.md +++ b/docs/content/practices/state_clearing.md @@ -1,18 +1,23 @@ # State clearing
+Whenever we execute UI tests, it is likely that we read/write some data locally.
+These changes can affect the execution of subsequent tests, for example:

-This question appears as soon as you need to run more than 1 ui test.

+* We run `Test1`; it performs some HTTP requests and saves some data to files and databases.
+* When `Test1` is finished, `Test2` is launched.
+* However, `Test1` left some data on the device, which can cause `Test2` to fail.

-## Problem

+That's where *state clearing* comes to the rescue: clear the data before each test.

-We run `Test1`, it performs some http requests, saves some data to files and databases.
-When `Test1` is finished, `Test2` will be launched.
-However, `Test1` left some data on the device which can be a reason of `Test2` failing.

+## Strategies for state clearing

-Solution — clear the data before each test

+There are a few strategies to deal with this:

-## 1. Clearing within a process

+1. Clearing within a process
+2. Clearing package data

-In this case, we don't kill our application process, and we have 2 options here:

+### 1. Clearing within a process
+
+The state clearing happens *without killing the application process*. We have 2 options here:

##### Use component from a real code base
@@ -34,10 +39,10 @@ Databases, Files, Preferences and Runtime cache, and should be executed before each test.

##### Clear internal storage
-All cache in an android application is stored in the internal storage: `/data/data/packagename/`
+All cache data (e.g. local databases, shared preferences and some files) in any Android application is written to the internal storage: `/data/data/packagename/`
This storage is our application sandbox and can be accessed without any permission.

-Basic idea is to avoid using components from a real code base. Instead of them, use some tests rules which do the job
+In order to avoid any issues, the basic idea is to avoid using components from a real code base. Instead, use some test rules which do the job
 for us.

```kotlin

@@ -58,10 +63,10 @@ them [here](https://github.com/AdevintaSpain/Barista/tree/master/library/src/mai

 !!! warning

-    This solution won't in 100% of cases:
+    This solution won't work in 100% of cases:

     1. You may have runtime cache, which can also affect your tests
-    2. Test or application process may crash and prevent the launch of next tests
+    2. The test or the application process may crash and prevent the launch of the next tests

##### Conclusion
@@ -76,14 +81,15 @@ These are pros/cons for both solutions which don't kill the process: Use these solutions only as a temp workaround, because it won't work on perspective in huge projects -## 2. Clearing package data +### 2. Clearing package data -Our aim is to simulate the same behavior as when user presses the `clear data` button in application settings. -
Application process will be cleared in that case, our application will be started in a cold start. +Our aim is to simulate the same behavior as when the user presses the `clear data` button in application settings. +
The application process will be killed in that case, and our application will be initialized with a cold start.

##### Orchestrator

-Basically, you can achieve an isolated state, if you execute your tests like this:
+The Android Orchestrator aims to isolate the state of each test by running each of them in a separate process.
+That can be achieved by executing your tests like this:

```bash
adb shell am instrument -w -e class TestClass#method1 com.package.name/junitRunnerClass
adb shell pm clear com.package.name
adb shell am instrument -w -e class TestClass#method2 com.package.name/junitRunnerClass
adb shell pm clear com.package.name
```

-Each test should be executed in an isolated instrumented process and junit reports should be merged into a big one
-report when all tests are finished.
-
-That's the common idea of `Orchestrator`.
+That's the idea behind the `Orchestrator`.
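In a Gradle project, the Orchestrator and its test services are usually wired in through build configuration rather than installed by hand. A minimal sketch in the Gradle Kotlin DSL, assuming the AndroidX Test artifacts (the version number is illustrative):

```kotlin
// build.gradle.kts — a sketch, not taken from this cookbook's repositories
android {
    defaultConfig {
        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
        // Ask the runner to clear the package data between tests
        testInstrumentationRunnerArguments["clearPackageData"] = "true"
    }
    testOptions {
        // Run each test through the Android Test Orchestrator
        execution = "ANDROIDX_TEST_ORCHESTRATOR"
    }
}

dependencies {
    // Gradle installs the orchestrator apk (and its test services) for you
    androidTestUtil("androidx.test:orchestrator:1.4.1")
}
```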
-It's just an `apk` which consist of -only [several classes](https://github.com/android/android-test/tree/master/runner/android_test_orchestrator/java/androidx/test/orchestrator) -and runs tests and clears data, as described above. +It's an `apk` which only consists of [several classes](https://github.com/android/android-test/tree/master/runner/android_test_orchestrator/java/androidx/test/orchestrator) +that run tests and clear data, as described above. You should install an `orchestrator` along with `application.apk` and `instrumented.apk` on the device. -However, it's not the end. +But that's not all.
-Orchestrator should somehow execute adb commands. Under the hood, it
-uses [special services.](https://github.com/android/android-test/tree/master/services)
-It's just a shell client and should be installed to the device.
+Orchestrator also needs to execute adb commands. For that, it uses [special services](https://github.com/android/android-test/tree/master/services) under the hood.
+It's just a shell client and should be installed on the device.

![alt text](../images/orchestrator.png "orchestrator and test-services")

@@ -118,7 +119,7 @@ It's just a shell client and should be installed to the device.

 Despite the fact that it does the job, this solution looks overcomplicated:

     1. We need to install +2 different apk to each emulator
-    2. We delegate this job to the device instead of host machine.
+    2. We delegate this job to the device instead of the host machine.
Devices are less reliable than host pc ##### Other solutions @@ -126,7 +127,7 @@ It's just a shell client and should be installed to the device. It's also possible to clear package data by using [3rd party test runners](https://android-ui-testing.github.io/Cookbook/practices/test_runners_review/), like Marathon, Avito-Runner or Flank. Marathon and Avito-Runner clear package data without an orchestrator. They delegate -this logic to a host machine +this logic to a host machine. ##### Conclusion
@@ -138,17 +139,19 @@ These are pros/cons for an `orchestrator` and 3rd party test runners solution: ➖ Orchestrator — over-complicated
Each `adb pm clear` takes some time and depends on apk size. Below you may see some gaps between the tests which -represent such a delay +represent such a delay. ![alt text](../images/package_clear.png "ADB package clearing takes some time") + +## Suggestion !!! success - Only package clear can guarantee that your data will be celared properly. + Only package clearing can guarantee that the data will be cleared properly between test executions. Marathon and Avito-Runner provide the easiest way to clear application data. - 1. You can set them just by one flag in configuration - 2. They don't use orchestrator under the hood + 1. One simply needs to set a flag in their configuration + 2. They don't use orchestrator under the hood, avoiding its caveats From f94a984d572a0f2efa8f918703b101102bcca2b6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Tue, 7 Dec 2021 22:20:03 +0100 Subject: [PATCH 02/11] some more corrections --- docs/content/practices/state_clearing.md | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/docs/content/practices/state_clearing.md b/docs/content/practices/state_clearing.md index fb43ed2..61bd9b6 100644 --- a/docs/content/practices/state_clearing.md +++ b/docs/content/practices/state_clearing.md @@ -72,14 +72,14 @@ them [here](https://github.com/AdevintaSpain/Barista/tree/master/library/src/mai These are pros/cons for both solutions which don't kill the process: -➕ Fast implementation
+➕ Easy implementation. Simply add the corresponding TestRules
➕ Fast execution in the same process

➖ Don't give you any guarantee that your app will be cleared properly
➖ Application or Test process killing will break the test execution<br>
-➖ Can be a bottleneck
+➖ Can be a bottleneck
-Use these solutions only as a temp workaround, because it won't work on perspective in huge projects
+Use these solutions only as a temp workaround, because it won't work in the long term for huge projects

 ### 2. Clearing package data

@@ -103,11 +103,11 @@ That's the idea behind of `Orchestrator`.

 It's an `apk` which only consists of [several classes](https://github.com/android/android-test/tree/master/runner/android_test_orchestrator/java/androidx/test/orchestrator)
 that run tests and clear data, as described above.

-You should install an `orchestrator` along with `application.apk` and `instrumented.apk` on the device.
+It is necessary to install an `orchestrator` along with the `application.apk` and the `instrumented.apk` on the device.

 But that's not all.
-Orchestrator also needs to execute adb commands. For that it uses [special services.](https://github.com/android/android-test/tree/master/services) under the hood.
+The Orchestrator also needs to execute adb commands. For that, it uses [special services](https://github.com/android/android-test/tree/master/services) under the hood.
 It's just a shell client and should be installed on the device.

![alt text](../images/orchestrator.png "orchestrator and test-services")

@@ -133,13 +133,15 @@ this logic to a host machine.

 These are pros/cons for an `orchestrator` and 3rd party test runners solution:

-➕ Does the job for us in 100%<br>
+➕ Does 100% of the job for us
+➕ Avoids tests failing in cascade due to the application process being killed<br>

-➖ Slow execution _(can take 10+ seconds and depends on apk size)_
-➖ Orchestrator — over-complicated
+➖ Slow execution
+➖ Requires installing extra components — over-complicated<br>
-Each `adb pm clear` takes some time and depends on apk size. Below you may see some gaps between the tests which
-represent such a delay.
+The slow execution has 2 sources:
+1. The killing and restarting of the process where the test runs.
+2. Executing `adb pm clear` after each test takes some time and depends on the apk size. Below you may see some gaps between the tests which represent such a delay.

![alt text](../images/package_clear.png "ADB package clearing takes some time")

From 166d8d72e71e4d99230afad0b3841048495ebf37 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3%B3rez?= Date: Tue, 7 Dec 2021 22:25:30 +0100 Subject: [PATCH 03/11] rewording --- docs/content/practices/state_clearing.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/content/practices/state_clearing.md b/docs/content/practices/state_clearing.md index 61bd9b6..a8224b8 100644 --- a/docs/content/practices/state_clearing.md +++ b/docs/content/practices/state_clearing.md @@ -33,7 +33,7 @@ application: Databases, Files, Preferences and Runtime cache, and should be executed before each test.

 !!! danger
-    This solution is a bottleneck and it's better to avoid it at all. If LogoutCleaner is broken, all of the tests will be failed.
+    This solution is a bottleneck and it's better to avoid it altogether. If LogoutCleaner is broken, all of the tests will fail.
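To make that bottleneck concrete, here is a minimal sketch of such a component-based clearing rule. It is illustrative only: `LogoutCleaner` is the hypothetical production component named above, and the rule itself is not from any real code base:

```kotlin
import org.junit.rules.TestWatcher
import org.junit.runner.Description

// Sketch: runs a production cleaner before every test.
class ClearStateRule : TestWatcher() {
    override fun starting(description: Description) {
        // If LogoutCleaner (hypothetical) is broken, every test fails right here,
        // which is exactly why this approach is a bottleneck.
        LogoutCleaner().clear()
    }
}
```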
@@ -140,7 +140,7 @@ These are pros/cons for an `orchestrator` and 3rd party test runners solution: ➖ Requires to install extra components — over-complicated:
The slow execution has 2 sources:
-1. The killing and restarting of the process where the test runs.
+1. The time consumed in killing and restarting the process where each test runs, multiplied by the number of tests.
 2. Executing `adb pm clear` after each test takes some time and depends on apk size. Below you may see some gaps between the tests which represent such a delay.

![alt text](../images/package_clear.png "ADB package clearing takes some time")

From 1fb3bf500eb4772a8a7bbd338e1b6cc53655dd89 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Tue, 7 Dec 2021 22:36:02 +0100 Subject: [PATCH 04/11] better wording for suggestion --- docs/content/practices/state_clearing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/practices/state_clearing.md b/docs/content/practices/state_clearing.md index a8224b8..0545ed3 100644 --- a/docs/content/practices/state_clearing.md +++ b/docs/content/practices/state_clearing.md @@ -146,7 +146,7 @@ The slow execution has 2 sources:

-## Suggestion
+## Final conclusion

From 88d0741b9297c96903bb5278e35efdfe23bb1299 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Thu, 9 Dec 2021 00:00:32 +0100 Subject: [PATCH 05/11] Review of "Test Runners review" --- docs/content/practices/test_runners_review.md | 71 ++++++++++--------- 1 file changed, 37 insertions(+), 34 deletions(-) diff --git a/docs/content/practices/test_runners_review.md b/docs/content/practices/test_runners_review.md index b8b4d70..0904c0d 100644 --- a/docs/content/practices/test_runners_review.md +++ b/docs/content/practices/test_runners_review.md @@ -1,64 +1,67 @@ # Test runners

-Test runner is responsible for tests run and providing test result for us.
+A test runner is the component responsible for:
+1. Preparing the test runs
+2. Providing test results

-`AndroidJunitRunner` — Official solution and low-level instrument. It requires a lot of effort from engineers to run
-tests on CI and make them stable.
+`AndroidJunitRunner` — The official solution and low-level instrument. It requires a lot of effort from engineers to run
+tests on the CI and make them stable.

-It's worth to mention — tools are getting better year by year. However, some basic functionality still doesn't work from
-the box properly.
+It's worth mentioning that UI testing tools are getting better year by year. However, some basic functionality still doesn't work properly out
+of the box.

### 1. Problems with AndroidJunitRunner:

-* Overcomplicated solution with clearing
+* Overcomplicated solution with state clearing<br>
- _It would be good to have only one flag which does application clearing for us. + _It would be good to do the state clearing of the application by setting a flag.
- It exists, however to have scalability on CI and opportunity to use filters, you still have to
- install `test-services.apk` and `orchestrator.apk` to each device manually_
+ It exists; however, to make it scalable on the CI, as well as to have the opportunity to use filters, you still have to
+ install `test-services.apk` and `orchestrator.apk` on each device manually_

* Impossibility to scale<br>
- _As soon as you started your tests, it's impossible to connect more devices to tests run on fly_
+ _Once your tests have started, it's impossible to let more devices join the test execution on the fly_

* Impossibility to prevent flakiness<br>
- _Flakiness is one of the main problems in instrumented testing. Test runner should play role as a latest flakiness
- protection level, like support retries from the box or other strategies_
+ _Flakiness is one of the main problems in instrumented testing. Test runners should provide some mechanisms to fight
+ flakiness, like support for retrying failing tests, among other strategies_

* Impossibility to validate flakiness<br>
- _Flakiness can be validated by running each test multiple times, and if test pass N/N, it's not flaky. It would be
+ _Flakiness can be validated by running each test multiple times. If the test passes every single time it runs, it's not flaky. It would be
 great to launch each test 100 times with one command_

* Poor test report<br>
- _Default test report doesn't show any useful information. As an engineer, I want to see a video of the test, logs and
- to make sure that test hasn't been retried. Otherwise, I'd like to see retries and each retry video and logs._
+ _The default test report doesn't show enough valuable information for each test. As an engineer, I want to see a video of the test and its logs. Moreover, I'd like to know whether
+ the test has been retried. If so, I'd also like to see how many retries there were, along with their corresponding videos and logs_

* Impossibility to retry<br>
- _It's possible to do only via special test rule which does retry for us. However, it's up to test runner to retry each
- test, as instrumented process can be crashed and device less reliable than host machine._
+ _It's only possible to do it via a special test rule which does the retries for us (see the sketch after this list). However, it's up to the test runner to retry each
+ test. That's because:_
+ 1. _The instrumented process might have crashed. In this case the test rule may not execute the code where the retry actually happens._
+ 2. _The device is less reliable than the host machine._<br>
- _Also, it should be possible to define maximum retry count: Imagine, your application crashed on start. We shouldn't
- retry each test in that case because there is no sense to overload build agents on CI._
+ _Also, it should be possible to define a maximum retry count. Imagine your application reliably crashes on start, and you have plenty of tests executing that code. We shouldn't
+ retry each test in that case: we would overload the build agents on the CI with tests that are doomed to fail._

* Impossibility to record a video<br>
- _It's possible to achieve and implement manually, however It would be really great to have such functionality from the
- box_
+ _It's possible to achieve and implement manually. However, it would be really great to have such functionality already built in_

Almost all of those problems can be solved, but it can take weeks or even months of your time. Besides running tests,
you also need to care about writing tests, which is challenging as well.<br>
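As a reference for the retry point in the list above, here is a minimal sketch of such a retry `TestRule`. It is illustrative only, and it also shows the weakness: the retry loop lives inside the instrumented process, so it cannot survive a process crash the way a runner-level retry can:

```kotlin
import org.junit.rules.TestRule
import org.junit.runner.Description
import org.junit.runners.model.Statement

// Sketch of an in-process retry rule (JUnit 4).
class RetryRule(private val maxAttempts: Int = 3) : TestRule {
    override fun apply(base: Statement, description: Description): Statement =
        object : Statement() {
            override fun evaluate() {
                var lastError: Throwable? = null
                repeat(maxAttempts) { attempt ->
                    try {
                        base.evaluate() // run the test body
                        return          // passed, stop retrying
                    } catch (t: Throwable) {
                        lastError = t
                        println("${description.displayName}: attempt ${attempt + 1}/$maxAttempts failed")
                    }
                }
                throw lastError!! // all attempts failed
            }
        }
}
```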
-It would be great to have that problems solved from the box +Having those problems solved for you lets you focus on other tasks. ### 2. Open source test runners -All of them used `AndroidJunitRunner` under the hood, as it's the only possibility tun run instrumented tests. +All of them use `AndroidJunitRunner` under the hood, as it's the only possibility to run instrumented tests. #### [:green_square: 2.1 Marathon](https://github.com/MarathonLabs/marathon) @@ -72,25 +75,25 @@ do the whole job for you. ➕ Dynamic test batching (test count/test duration)
➕ Smart retries with quotas<br>
➕ Screenshots & video out of the box
-➕ Improved test report with video & logs
-➕ Automatically rebalanced test execution if connecting/disconnecting devices on fly
+➕ Improved test report with video & logs
+➕ Automatically rebalanced test execution if connecting/disconnecting devices on the fly
➕ Pull files from the device after test run, e.g. [allure-kotlin](https://github.com/allure-framework/allure-kotlin)
-➕ Basic [Allure](https://github.com/allure-framework) support out of the box
+➕ Basic [Allure](https://github.com/allure-framework) support out of the box
➕ adb client `ddmlib` replacement: [Adam](https://github.com/Malinskiy/adam)
➕ Cross-platform (iOS support)
➕ Fragmented test execution (similar to AOSP's sharding): split large testing suites into multiple CI builds
➕ Parallel execution of parameterised tests
-➕ Interactions with adb/emulator from within a test (e.g. fake fingerprint or GPS)
+➕ Interactions with adb/emulator from within a test (e.g. fake fingerprint or GPS)
➕ Code coverage support
-➕ Testing multi-module projects in one test run
+➕ Testing multi-module projects in one test run
➕ Flakiness fixing mode to verify test passing probability improvements
➖ Doesn't auto-scale devices
_(Marathon will utilise more devices in runtime if some other system connects more to the adb, but marathon itself will not spawn more emulators for you)_
➖ HTML report doesn't contain test retry information (but the Allure report does)
-➖ For complex test executions that solve test flakiness requires an installation of TSDB (InfluxDB or Graphite)
+➖ Complex test executions that solve test flakiness require installing a TSDB (InfluxDB or Graphite)<br>
[Documentation](https://marathonlabs.github.io/marathon/) @@ -100,15 +103,15 @@ Powerful test runner. Works directly with `Kubernetes` ➕ Easy data clearing _(without an Orchestrator)_
➕ Auto-scaling on the fly _(There is a coroutine in the background which tries to connect more devices)_

-➕ Retries
+➕ Retries

➖ Complicated adoption<br>
-This test runner has been using by Avito company for 4+ years and runs thousands tests every day. It's not as powerful -as Marathon, however it doesn't have an analogue in terms of auto scaling from the box.
+This test runner has been used by Avito for 4+ years and runs thousands of tests every day. It's not as powerful +as Marathon, however it doesn't have an analogue in terms of auto-scaling out of the box.
If you want to run your UI tests on pull requests in a large team, this test runner is one of the best options.

-Engineers from Avito are ready to help with adoption. You can contact to [Dmitriy Voronin](https://github.com/dsvoronin)
+Engineers from Avito are ready to help with adoption. You can reach out to [Dmitriy Voronin](https://github.com/dsvoronin)

[Documentation](https://avito-tech.github.io/avito-android/test_runner/TestRunner/)

From 497d515ac8ea93f38fbba28da277e5bcf89a6e06 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Fri, 10 Dec 2021 21:53:04 +0100 Subject: [PATCH 06/11] Emulator vs real device review --- .../practices/emulator_vs_real_device.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/docs/content/practices/emulator_vs_real_device.md b/docs/content/practices/emulator_vs_real_device.md index 816115e..3257d2f 100644 --- a/docs/content/practices/emulator_vs_real_device.md +++ b/docs/content/practices/emulator_vs_real_device.md @@ -1,37 +1,37 @@ # Emulator vs Real device
-
-This question is a trade off and there is no right and wrong answers. We'll review pros/cons and basic emulator setup on
-CI
+Instrumented tests can run either on emulators or real devices. Which one should we use?
+This question is a trade-off and there is no right or wrong answer. We'll review the pros and cons of both approaches.

## Real device

-Here is pros/cons
+Here are the pros/cons

➕ Real environment<br>
-➕ Doesn't consume CI resources
+➕ Doesn't consume significant CPU, RAM or disk resources on the CI.

-➖ Breaks often<br>
-➖ Requires special conditions
+➖ Breaks often: battery swelling, USB port failures, OS failures... all this happens because real devices are not designed to be used intensively around the clock.<br>
+➖ Requires placing them in a room with special environmental conditions<br>
-A real device will help you to catch more bugs from the first perspective, however talking about scalability, if you
+Although it seems that a real device is a better alternative because it helps you catch bugs in a full-fledged Android environment, it comes with its own issues. Talking about scalability, if you
 have a lot of devices, you need to locate them in a special room with no direct sunlight and with climate control.

-However, it doesn't save from disk and battery deterioration, because they are always charging and performs I/O
-operations. It may be a reason of your tests failing not because of them caught a real bug, but because of an issue with
-a device.
+But that doesn't prevent them from disk and battery deterioration, because they are always charging and performing I/O
+operations. Therefore, if your tests fail, it could be because of an issue with
+a device and not because of a real bug in the app under test.

## Emulator

-Here is pros/cons
+Here are the pros/cons

➕ Easily configurable<br>
-➕ Can work faster than a real device
+➕ Can work faster than a real device
_Keep in mind that it's achievable only if you applied a special configuration and have powerful build agents_
-➕ Тot demanding in maintenance
+➕ Not demanding in hardware maintenance (e.g. battery, disk, USB ports, display...)<br>
-➖ Not a real environment
-➖ Consumes CI resources
+➖ Not the real environment on which the app will end up running
+➖ Consumes significant resources of the CI, like CPU, RAM and disk space<br>
+➖ Emulators might freeze if they stay idle for a long time and need to be restarted.<br>
-The most benefit that we may have is a fresh emulator instance each test bunch. Also, it's possible to create a special -configuration and disable features you don't need to have in tests which affects device stability. However, you need to -have powerful machine (and definitely not one, if you want to run your tests on pull requests) \ No newline at end of file +The main benefit that we may have is a fresh emulator instance on each test run. Also, it's possible to create a special +configuration and disable features you don't need to have in tests, like sensor or audio input/output, which affect device stability. Nevertheless, you need +a powerful machine (and definitely not one, if you want to run your instrumented tests on very pull requests) \ No newline at end of file From 1815a540ed0cd810091579be51f0566129deb6e5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Fri, 10 Dec 2021 22:46:45 +0100 Subject: [PATCH 07/11] emulator setup review --- docs/content/practices/emulator_setup.md | 41 ++++++++++++------------ 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/docs/content/practices/emulator_setup.md b/docs/content/practices/emulator_setup.md index 859dab6..25ecb55 100644 --- a/docs/content/practices/emulator_setup.md +++ b/docs/content/practices/emulator_setup.md @@ -9,10 +9,10 @@ Using docker image is the easiest way, however it's important to understand how ## Creating an emulator -Before starting to read this topic, make sure you've read -an [an official documentation](https://developer.android.com/studio/run/emulator-commandline) +Before starting to read this, make sure you've read +[the official documentation](https://developer.android.com/studio/run/emulator-commandline) -Firstly, you need to create an `ini` configuration: +Firstly, you need to create an `ini` configuration file for the emulator: ```ini PlayStore.enabled=false @@ -48,7 +48,7 @@ skin.name=320x480 disk.dataPartition.size=8G ``` -Pay your attention that we disabled: +Pay attention to what we have disabled: * Accelerometer * Audio input/output @@ -56,11 +56,11 @@ Pay your attention that we disabled: * Sensors:Accelerometer, Humidity, Pressure, Light * Gyroscope -We don't really need them for our tests run. It also may improve our tests performance, because there are no background -operations related to that tasks. +We don't really need them for our test runs. It also may improve our tests performance, because there are no background +operations related to those tasks. -After that, you can run your emulator by `avd manager`, which is a part of android sdk manager. After your device -creation, you need change default generated ini file to custom one. You may see an example below: +After that, you can run your emulator via `avd manager`, which is part of the android sdk manager. After your device +creation, you need to change the default generated `ini` file to a custom one. Take a look at the example below: ```bash function define_android_sdk_environment_if_needed() { @@ -109,21 +109,22 @@ define_path_environment_if_needed create_and_patch_emulator ``` -Pay your attention that you also need to wait until your emulator is fully booted. +Keep in mind that you also need to wait until your emulator is fully booted. Otherwise the tests will fail because there is still no device ready +on which the test can run. ## How to run an emulator in a Docker? -Running an emulator in a docker a way easier than manually, because it encapsulates all this logic. 
If you don't have an -experience with docker, you can check -[this guide](https://www.youtube.com/watch?v=zJ6WbK9zFpI) to check the basics. +Running an emulator in a docker is way easier than manually, because it encapsulates all this logic. If you don't have +experience with docker, check +[this guide](https://www.youtube.com/watch?v=zJ6WbK9zFpI) to get familiarized with the basics. -There are some popular already built docker images for you: +There are some popular docker images already built for you: * [Official Google emulator](https://github.com/google/android-emulator-container-scripts) * [Agoda emulator](https://github.com/agoda-com/docker-emulator-android) * [Avito emulator](https://hub.docker.com/r/avitotech/android-emulator-29) -Talking about [Avito emulator](https://github.com/google/android-emulator-container-scripts), it also patches your +Talking about the [Avito emulator](https://github.com/google/android-emulator-container-scripts), it also patches your emulator with adb commands to prevent tests flakiness and to speed them up ##### Run Avito emulator @@ -154,18 +155,18 @@ docker rm $(docker ps -a -q) ## Conclusion * Use docker emulators
- _You also will have an opportunity to run them with `Kubernetes`, to make it scalable in the future_ + _You'll also have the opportunity to run them with `Kubernetes`, to make it scalable in the future_ -* Start fresh emulators each test batch and kill them after all of your tests finished
- _Emulators tend to leak and may not work properly after some time_ +* Start fresh emulators on each test batch and kill them after all of your tests finished
+ _Emulators tend to freeze and may not work properly after idling for some time_ -* Use the same emulator as on CI locally
+* Use the same emulator locally as on your CI
_All devices are different. It can save you a lot of time with debugging and understanding why your test works locally - and fails on CI. It won't be possible to run Docker emulator on macOS or Windows, because + and fails on CI. It won't be possible to run Docker emulators on macOS or Windows, because of [haxm#51](https://github.com/intel/haxm/issues/51#issuecomment-389731675). Use AVD to launch them on such machines (script above may help you)_ !!! warning - To run an emulator on CI with a docker, make sure that nested virtualisation supported and KVM installed. + To run an emulator on CI with a docker, make sure that nested virtualisation is supported and KVM is installed. You can check more details [here](https://developer.android.com/studio/run/emulator-acceleration#vm-linux) \ No newline at end of file From 3e9b9aa83631d278b47d5a361e04c360bdeef469 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Fri, 10 Dec 2021 23:00:16 +0100 Subject: [PATCH 08/11] second review emulator setup --- docs/content/practices/emulator_setup.md | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/docs/content/practices/emulator_setup.md b/docs/content/practices/emulator_setup.md index 25ecb55..9960a46 100644 --- a/docs/content/practices/emulator_setup.md +++ b/docs/content/practices/emulator_setup.md @@ -1,11 +1,11 @@ # Emulator setup -Basically we have next choices: +Basically we have 2 choices: -* Manage devices automatically by `avd` -* Manage docker containers with emulators by `docker` +* Manage devices automatically via `avd` +* Manage docker containers with emulators with `docker` -Using docker image is the easiest way, however it's important to understand how docker creates device for you. +Using a docker image is the easiest way, however it's important to understand how docker creates emulators for you. ## Creating an emulator @@ -59,8 +59,8 @@ Pay attention to what we have disabled: We don't really need them for our test runs. It also may improve our tests performance, because there are no background operations related to those tasks. -After that, you can run your emulator via `avd manager`, which is part of the android sdk manager. After your device -creation, you need to change the default generated `ini` file to a custom one. Take a look at the example below: +After that, you can run your emulator via `avd manager`, which is part of the android sdk manager. After creating the emulator, you need to switch the default generated `ini` file to the custom one we defined previously. +You can achieve that with a script like this one: ```bash function define_android_sdk_environment_if_needed() { @@ -109,8 +109,13 @@ define_path_environment_if_needed create_and_patch_emulator ``` -Keep in mind that you also need to wait until your emulator is fully booted. Otherwise the tests will fail because there is still no device ready -on which the test can run. +Keep in mind that the emulator must fully boot before running any test. Otherwise the tests will fail because there is still no device ready +on which they can run. + +### Summary +1. create an `ini` configuration file for the emulator +2. run your emulator via `avd manager` and +3. switch the `ini` file generated in 2. with the one we create in 1. ## How to run an emulator in a Docker? 
@@ -125,7 +130,7 @@ There are some popular docker images already built for you: * [Avito emulator](https://hub.docker.com/r/avitotech/android-emulator-29) Talking about the [Avito emulator](https://github.com/google/android-emulator-container-scripts), it also patches your -emulator with adb commands to prevent tests flakiness and to speed them up +emulator with adb commands to prevent tests flakiness and to speed them up. ##### Run Avito emulator From eab7416a0f7744158e90a584da75b0a24dc2c3ec Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Fri, 10 Dec 2021 23:02:25 +0100 Subject: [PATCH 09/11] remove unnecessary "a" --- docs/content/practices/emulator_setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/practices/emulator_setup.md b/docs/content/practices/emulator_setup.md index 9960a46..fab077f 100644 --- a/docs/content/practices/emulator_setup.md +++ b/docs/content/practices/emulator_setup.md @@ -117,7 +117,7 @@ on which they can run. 2. run your emulator via `avd manager` and 3. switch the `ini` file generated in 2. with the one we create in 1. -## How to run an emulator in a Docker? +## How to run an emulator in Docker? Running an emulator in a docker is way easier than manually, because it encapsulates all this logic. If you don't have experience with docker, check From 8124fa0129951f3dab6e8e0c182b97ed248ba03d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Mon, 27 Dec 2021 23:33:01 +0100 Subject: [PATCH 10/11] Review page_object --- docs/content/practices/page_object.md | 53 ++++++++++++++------------- 1 file changed, 27 insertions(+), 26 deletions(-) diff --git a/docs/content/practices/page_object.md b/docs/content/practices/page_object.md index 363133f..3369f03 100644 --- a/docs/content/practices/page_object.md +++ b/docs/content/practices/page_object.md @@ -3,21 +3,21 @@ How to make tests more clear and readable? ## Problem - -There is a lot of `ViewMatchers` and so on in our tests once we need to find exact `View`. +UI test aim to verify the state of the screen state after interacting with its views. +For those interactions to happen, we need to find `ViewMatchers` that are unique to the views we'll interact with.
-Imagine that we do have hundreds of tests that starts with pressing same button. What will be if that button would
-change its id? We would change `ViewMatcher` inside every single test.
+Imagine that we have hundreds of tests whose first interaction is pressing the very same button, and we use its id as the `ViewMatcher`.
+What if that button's id changed? We'd need to adapt the `ViewMatcher` of that button inside every single test.<br>
-Also there is a problem if our `View` should be accessed with a lot of `ViewMatchers` used (for example when that `View`
-is a child of `RecyclerView`)
+Moreover, what if our `View` requires numerous `ViewMatchers` to be uniquely identified (for example when that `View`
+is a child of `RecyclerView`, where all recyclable views have the same id)?<br>
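To illustrate how verbose that can get, here is a sketch of a plain-Espresso interaction with a row inside a `RecyclerView`; the ids and the text are hypothetical:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.hasSibling
import androidx.test.espresso.matcher.ViewMatchers.isDescendantOfA
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import org.hamcrest.Matchers.allOf

// Several matchers are needed just to single out one row's title view.
onView(
    allOf(
        withId(R.id.item_title),                     // same id for every row
        isDescendantOfA(withId(R.id.recycler_view)), // narrow it down to the list
        hasSibling(withText("Item 42"))              // and to one specific row
    )
).perform(click())
```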
-What should we do in the above cases? May we should extract this `View` to another abstraction?
+What should we do in such cases? Should we put this `View` into another abstraction level?

## Solution: Page Object Pattern

-Actually that pattern came to Android world from Web testing. This is how `PageObject` determined by one of its creator:
+That pattern actually came to the Android world from Web testing. This is how `PageObject` is defined by one of its creators:

> The basic rule of thumb for a page object is that it should allow a software client to do anything and see anything that a human can. It should also provide an interface that's easy to program to and hides the underlying widgetry in the window. So to access a text field you should have accessor methods that take and return a string, check boxes should use booleans, and buttons should be represented by action oriented method names.
>
www.martinfowler.com/bliki/PageObject.html @@ -27,7 +27,7 @@ We do have some screen with 3 `Buttons` ![alt text](../images/page_object_example.png "Page object example") -#### Let's write some test for that screen with plain espresso +#### Let's write some test for that screen with plain Espresso ```kotlin @Test @@ -40,9 +40,9 @@ fun testFirstFeature() { } ``` -That test finds one of our button then checks its visibility and after that performs usual click. +That test finds one of our buttons, checks its visibility and after that performs a click. -Main problem here — it's not easy to read. +The main problem here — it's not easy to read. #### What do we want to achieve with PageObject? @@ -56,25 +56,25 @@ fun testFirstFeature() { } ``` -What is the difference we can see here? +What is the difference? -* We use `ViewMatcher` inside of our test
-* We added `MainScreen` abstraction that actually is a `PageObject` of screen provided in example
-* `isVisible()` and `click()` are extensions (for example) +* We do not use `ViewMatcher`s inside our test: they are hidden under our `PageObject`
+* We added `MainScreen` abstraction. It is the `PageObject` of the screen provided in the example
+* `isVisible()` and `click()` are implemented as Kotlin extensions (for example)

-As you can see that change made our code more clear and readable. And that happened even with one single test that
-checks visibility of button and clicks on it.
+As you can see, that change made our code more clear and readable. And that happened even with one single test that
+checks the visibility of a button and clicks on it.

-Just imagine how much effort that pattern will bring to your codebase in case of hundreds tests written
-with `PageObject`
+Just imagine how much benefit that pattern will bring to your codebase in the case of hundreds of tests written
+with the `PageObject`

### Instead of writing your own implementation of PageObject pattern

-Just take a look for [Kakao library](https://github.com/agoda-com/Kakao) it has a modern `Kotlin DSL` implementation
-of `PageObject` pattern
+Just take a look at the [Kakao library](https://github.com/agoda-com/Kakao). It has a modern `Kotlin DSL` implementation
+of the `PageObject` pattern.

-A lot of useful classes for interact with.
-For example, same test for our screen written with `Kakao` library will look like +It comes with a lot of useful classes and actions for View interactions. That's why other libraries like `Kaspresso` use `Kakao` under the hood.
+For example, the same test for our screen written with `Kakao` would look like this ```kotlin @Test @@ -92,6 +92,7 @@ fun testFirstFeature() { `PageObject` pattern helps us to: -➕ Remove duplicates of `ViewMatchers` from tests
-➕ Once we change id/text/whatever of `View` we should change it only in one place of `PageObject` class
-➕ New abstraction to make code more readable and clear \ No newline at end of file +➕ Make the code more readable and clear
+ 1. `ViewMatchers` make the code hard to read, and all of them are now inside the `PageObject` class.
+ 2. No more `ViewMatchers` duplication among tests: all Views have their `ViewMatchers` uniquely defined in the `PageObject` class.<br>
+➕ Once we change the id/text/whatever of the target `View` we should change it only in one place: the `PageObject` class
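For reference, a minimal sketch of what such a `MainScreen` page object could look like with plain Espresso. The ids, the `mainScreen { }` entry point and the extension names are all hypothetical, not taken from Kakao:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.ViewInteraction
import androidx.test.espresso.action.ViewActions
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId

// The PageObject: every ViewMatcher of the screen lives here and only here.
class MainScreen {
    val firstButton: ViewInteraction = onView(withId(R.id.first_button))
    val secondButton: ViewInteraction = onView(withId(R.id.second_button))
    val thirdButton: ViewInteraction = onView(withId(R.id.third_button))
}

// DSL entry point so tests can read as `mainScreen { firstButton.isVisible() }`.
fun mainScreen(block: MainScreen.() -> Unit) = MainScreen().apply(block)

// Extensions that hide the Espresso plumbing behind readable names.
fun ViewInteraction.isVisible(): ViewInteraction = check(matches(isDisplayed()))
fun ViewInteraction.click(): ViewInteraction = perform(ViewActions.click())
```

If the button's id changes, only `MainScreen` has to be touched; every test keeps reading the same way.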
From 3398f944d22db0360e70253f7b6eb5f1ff148f45 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sergio=20Sastre=20Fl=C3=B3rez?= Date: Fri, 3 Jun 2022 20:35:45 +0200 Subject: [PATCH 11/11] Update flakiness --- docs/content/practices/flakiness.md | 60 ++++++++++++++--------------- 1 file changed, 28 insertions(+), 32 deletions(-) diff --git a/docs/content/practices/flakiness.md b/docs/content/practices/flakiness.md index d221dfb..f8cf67a 100644 --- a/docs/content/practices/flakiness.md +++ b/docs/content/practices/flakiness.md @@ -2,10 +2,10 @@

![alt text](../images/practices/header_flakiness.svg "Sad")

-Flakiness it's an unstable behavior of particular test. If you execute this test N times, it won't pass `N/N`. Or, it
-can pass only locally, but always or often failed on the CI.
+Flakiness means lack of reliability of a particular test. If you execute this test N times, it won't pass `N/N`. Or, it
+might only pass locally, but often (or always) fail on the CI.

-It's the most frustrating problem in instrumented testing, which requires a lot of time from engineers to fight.
+Understanding the causes of flakiness is one of the most frustrating problems in instrumented testing, and fighting it requires a lot of time from engineers.

## Reason

* Test code<br>
`Example: testing toasts/snack-bars` * A real device or Emulator
- `Example: Disk/Battery/Processor/Memory issues or notification has shown on the device` + `Example: Disk/Battery/Processor/Memory issues or notification showed up on the device` * Infrastructure
`Example: Processor/Disk/Memory issues`

{==

-It's not possible to fight flakiness on 100% if your codebase changes every day (including the new sources of flakiness)
+It's not possible to completely eradicate flakiness if your codebase changes every day: every code change can potentially add flakiness.

However, it's possible to reduce it and achieve a good percentage of flakiness-free runs.

==}

-In general, to reduce flakiness you need to choose tools like framework for writing, test runner and emulator properly
+In general, the key to reducing flakiness is to pick the right tools: **test framework**, **test runner** and **emulator**

## Flakiness protection

#### 1. Wait for the content appearing
-: When we have an http request or other asynchronous operation, it's not possible to predict how soon our expected -content will be shown on the screen.
By default, Espresso framework will fail assertion if there is no expected -content in a particular time. +: When a http request or any other asynchronous operation is running, it's not possible to predict how long it takes to return a response to fill our screen with data.
If there is no content on the screen at the time the Espresso assertions occur, the tests will fail.
-Google provided [Idling Resources](https://developer.android.com/training/testing/espresso/idling-resource) to catch +In order to solve this problem, Google provided [Idling Resources](https://developer.android.com/training/testing/espresso/idling-resource) to watch asynchronous operations. -
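As a quick illustration of why this pushes test code into production code, here is a sketch using `CountingIdlingResource` from the `androidx.test.espresso:espresso-idling-resource` artifact; the `NetworkIdler` name is hypothetical:

```kotlin
import androidx.test.espresso.IdlingRegistry
import androidx.test.espresso.idling.CountingIdlingResource

// Production code has to increment/decrement this counter around async work.
object NetworkIdler {
    val resource = CountingIdlingResource("network")

    fun begin() = resource.increment() // called before a request starts
    fun end() = resource.decrement()   // called when the response arrives
}

// In the test setup, Espresso is told to wait while the counter is non-zero:
fun registerIdler() {
    IdlingRegistry.getInstance().register(NetworkIdler.resource)
}
```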
However, this goes against the common testing best practice of not putting testing code inside your application -code and also requires an additional effort from engineers.
-Recommended by community way it's to use smart-waiting (aka flaky safely algorithm) like this +
+However, Idling Resources require putting testing code in production. This goes against common testing best practices and also requires additional effort from engineers.<br>
+The recommended means by the community is to use smart-waiting (aka flaky safely algorithm) like this : ```kotlin fun invokeFlakySafely( @@ -73,24 +70,23 @@ fun invokeFlakySafely( } ``` -: This is an internals of +: This algorithm is the foundation of the [Kaspresso library](https://github.com/KasperskyLab/Kaspresso/blob/c8c32004494071e6851d814598199e13c495bf00/kaspresso/src/main/kotlin/com/kaspersky/kaspresso/flakysafety/algorithm/FlakySafetyAlgorithm.kt) -: Official documentation says that it's not a good way to handle this, because of an additional consuming of CPU -resources. However, it's a pragmatic trade-off which speed ui testing writing up and relieves engineers from thinking +: Official documentation says that it's not a good way to handle this, because of additional CPU consumption. However, it's a pragmatic trade-off which speeds up writing of ui tests and relieves engineers from thinking about this problem at all. -: Some frameworks have already implemented solution, which intercepts all assertions: +: Moreover, some frameworks have implemented a system based on exception interceptors: whenever an assertion fails and throws an exception, the framework executes an action (e.g. scroll down) and retries the failing assertion. : * [Avito UI test framework](https://github.com/avito-tech/avito-android/tree/develop/subprojects/android-test/ui-testing-core/src/main/kotlin/com/avito/android) * [Kaspresso](https://github.com/KasperskyLab/Kaspresso) -: Consider using them to avoid this problem at all. +: Consider using them to avoid issues with asynchronous operations. #### 2. Use isolated environment for each test
: Package clear before each test will delete all your data in the application and kill the process itself. This will get rid of the
likelihood of old data affecting your current test. Marathon and Avito-Test runner provide the easiest way to clear the
state.<br>
@@ -99,26 +95,26 @@ here: [State Clearing](https://android-ui-testing.github.io/Cookbook/practices/s #### 3. Test vanishing content in other way (Toasts, Snackbars, etc)
: Testing content which is going to be hidden after a certain time (usually ms) is also challenging. A toast might be shown
properly, but your test framework is checking other content on the screen at that particular moment. When this check is
done and it is time to assert the toast, it might have already disappeared. Therefore, your test will fail.

: One way to solve this is not to test it at all. Or, on the other hand, you can have some proxy object which remembers that the
Toast/SnackBar has been shown. This solution has already been implemented by the company Avito, you can check the
details [here](https://avito-tech.github.io/avito-android/test/Toast/)

: If you have your own designed component, which also disappears after some time, you can disable this disappearing behavior for tests
and close it manually.

#### 4. Use special configuration for your device
-: In the most of the cases you don't to have Accelerometer, Audio input/output, Play Store, Sensors and Gyroscope in +: In most cases you don't need the Accelerometer, Audio input/output, Play Store, Sensors and Gyroscope in your tests.
You can see how to disable them here: [Emulator setup](https://android-ui-testing.github.io/Cookbook/practices/emulator_setup/) -: Also, it's recommended way to disable animations on the device, screen-off timeout and long press timeout. The script +: For more reliability, it's also recommended to disable animations on the device, screen-off timeout and long press timeout. The script below will patch all your devices connected to `adb` ```bash devices=$(adb devices -l | sed '1d' | sed '$d' | awk '{print $1}') @@ -138,9 +134,9 @@ done #### 5. Use fresh emulator instance each test batch
: Your tests may affect the emulator itself, e.g. by saving some information to the external storage, which can be one more reason for
flakiness. It's not pragmatic to run a new emulator for each test in terms of speed; however, you can do it for each batch of tests.
Just kill all the emulators once all of your tests have finished.<br>
You can see how to set this up here: [Emulator setup](https://android-ui-testing.github.io/Cookbook/practices/emulator_setup/)