docs: testing.rst
kdmccormick committed Dec 13, 2024
1 parent 27fa475 commit a92cdac
Showing 1 changed file with 79 additions and 194 deletions: docs/concepts/testing/testing.rst

#######
Testing
#######

:depth: 3

Overview
********

We maintain two kinds of tests: unit tests and integration tests.

Most of our tests are unit tests or integration tests.

Test Types
==========

Unit Tests
----------

- Each test case should be concise: setup, execute, check, and
  teardown (a minimal sketch follows below). If you find yourself
  writing tests with many steps,
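
For illustration only (this snippet is not taken from the codebase, and the
names in it are made up), a test with that setup/execute/check/teardown shape
might look like::

    import tempfile

    import pytest


    @pytest.fixture
    def scratch_file():
        # Setup: create a temporary file for the test to use.
        handle = tempfile.TemporaryFile(mode="w+")
        yield handle
        # Teardown: close (and thereby discard) the temporary file.
        handle.close()


    def test_write_then_read(scratch_file):
        # Execute: the single behavior under test.
        scratch_file.write("hello")
        scratch_file.seek(0)
        # Check: one focused assertion.
        assert scratch_file.read() == "hello"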


Integration Tests
-----------------

- Test several units at the same time (a sketch appears at the end of this
  section). Note that you can still mock or patch dependencies that are not
  under test! For example, you might test that
.. _Django test client: https://docs.djangoproject.com/en/dev/topics/testing/overview/
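
As an illustrative sketch (the patched import path and URL name below are
hypothetical, not real edx-platform names), an integration test might exercise
a view through the Django test client while mocking one slow dependency::

    from unittest import mock

    from django.test import TestCase
    from django.urls import reverse


    class AboutPageIntegrationTest(TestCase):
        """Exercise the URL routing, view, and template together."""

        @mock.patch("example_app.views.fetch_enrollment_count")  # hypothetical dependency
        def test_about_page_shows_enrollment(self, mock_count):
            mock_count.return_value = 42
            response = self.client.get(reverse("example-about-page"))  # hypothetical URL name
            self.assertEqual(response.status_code, 200)
            self.assertContains(response, "42")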

Test Locations
==============

- Python unit and integration tests: Located in subpackages called
``tests``. For example, the tests for the ``capa`` package are
the test for ``src/views/module.js`` should be written in
``spec/views/module_spec.js``.

Factories
=========

Many tests delegate set-up to a "factory" class. For example, there are
factories for creating courses, problems, and users. This keeps set-up
logic encapsulated and out of the tests themselves.

Factories are often implemented using `FactoryBoy`_.

In general, factories should be located close to the code they use. For
example, the factory for creating problem XML definitions is located in
``xmodule/capa/tests/response_xml_factory.py`` because the
``capa`` package handles problem XML.

.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
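
edx-platform defines its own factories for courses, users, and so on; the
snippet below is only a simplified sketch of the general pattern, using a
made-up factory built on Django's user model::

    import factory
    from django.contrib.auth import get_user_model


    class UserFactory(factory.django.DjangoModelFactory):
        """Build throwaway users without repeating set-up code in each test."""

        class Meta:
            model = get_user_model()

        username = factory.Sequence(lambda n: f"test_user_{n}")
        email = factory.LazyAttribute(lambda user: f"{user.username}@example.com")

A test can then call ``UserFactory()`` (or ``UserFactory.create_batch(5)``)
instead of constructing and saving model instances by hand.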

Running Python Unit Tests
*************************

The following commands need to be run within a Python environment in
which requirements/edx/testing.txt has been installed. If you are using a
Docker-based Open edX distribution, then you probably will want to run these
commands within the LMS and/or CMS Docker containers.

We use `pytest`_ to run Python tests. Pytest is a testing framework for Python and should be your go-to for local Python unit testing.
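
For example (the paths and the Tutor service name here are illustrative), you
can point pytest at a package of tests, either from inside the container or
from the host via Tutor::

    # From a shell inside the LMS container (or an activated virtualenv):
    pytest xmodule/tests/

    # Or, if you manage containers with Tutor, from the host:
    tutor dev run lms pytest xmodule/tests/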

Pytest (and all of the plugins we use with it) has a lot of options.


Running Python Test Subsets
===========================

When developing tests, it is often helpful to be able to run just a single test without the overhead of pip installs, UX builds, etc.

Various tools like ddt create tests with very complex names; rather than
figuring out those names yourself, you can ask pytest to list the tests it
collects::

    pytest xmodule/tests/test_stringify.py --collectonly
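
Once you have a full test "node ID" from ``--collectonly``, you can pass it
straight back to pytest, or select tests by keyword with ``-k`` (the class and
test names below are only examples)::

    pytest "xmodule/tests/test_stringify.py::TestStringify::test_stringify"
    pytest xmodule/tests/test_stringify.py -k stringify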

Testing with migrations
-----------------------

For the sake of speed, by default the Python unit test database tables
are created directly from apps' models. If you want to run the tests
against a database created by applying the migrations instead, use the
``--create-db --migrations`` options::

    pytest test --create-db --migrations

Debugging a test
----------------

There are various ways to debug tests in Python and more specifically with pytest:

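For instance, two common options (both plain pytest features rather than
anything edx-specific) are dropping into the debugger on failure, or setting
an explicit breakpoint; the test path below is just an example::

    # Drop into pdb automatically at the point of failure:
    pytest xmodule/tests/test_stringify.py --pdb

    # Or add a breakpoint() call to the code or test, then run with output
    # capturing disabled so the debugger prompt is usable:
    pytest xmodule/tests/test_stringify.py -s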


How to output coverage locally
==============================

These are examples of how to run a single test and get coverage::

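    # Illustrative only: these assume the pytest-cov plugin is available in the
    # testing environment (the module path is just an example).
    pytest xmodule/tests/test_stringify.py --cov=xmodule
    pytest xmodule/tests/test_stringify.py --cov=xmodule --cov-report=html

    # With --cov-report=html, the report is written to the htmlcov/ directory.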


Debugging Unittest Flakiness
============================

See this `confluence document <https://openedx.atlassian.net/wiki/spaces/TE/pages/884998163/Debugging+test+failures+with+pytest-xdist>`_.

Running JavaScript Unit Tests
*****************************

Before running JavaScript unit tests, you will need to be running Firefox or Chrome in a place visible to edx-platform.
If you are using Tutor Dev to run edx-platform, then you can do so by installing and enabling the
``test-legacy-js`` plugin from `openedx-tutor-plugins`_, and then rebuilding
the ``openedx-dev`` image::

    tutor plugins install https://github.com/openedx/openedx-tutor-plugins/tree/main/plugins/tutor-contrib-test-legacy-js
    tutor plugins enable test-legacy-js
    tutor images build openedx-dev

.. _openedx-tutor-plugins: https://github.com/openedx/openedx-tutor-plugins/

We use Jasmine (via Karma) to run most JavaScript unit tests. We use Jest to
run a small handful of additional JS unit tests. You can use the ``npm run
test*`` commands to run them::

    npm run test-karma  # Run all Jasmine+Karma tests.
    npm run test-jest   # Run all Jest tests.
    npm run test        # Run both of the above.

The Karma tests are further broken down into three types depending on how
the JavaScript under test is built::

    npm run test-karma-vanilla # Our very oldest JS, which doesn't even use RequireJS
    npm run test-karma-require # Old JS that uses RequireJS
    npm run test-karma-webpack # Slightly "newer" JS which is built with Webpack

Unfortunately, at the time of writing, the build for the ``test-karma-webpack``
tests is broken. The tests are excluded from ``npm run test-karma`` so as not
to fail CI. We `may fix this one day`_.

.. _may fix this one day: https://github.com/openedx/edx-platform/issues/35956

To run all Karma+Jasmine tests for a particular top-level edx-platform folder,
you can run::

    npm run test-cms
    npm run test-lms
    npm run test-xmodule
    npm run test-common

Finally, if you want to pass any options to the underlying ``node`` invocation
for Karma+Jasmine tests, you can run one of these specific commands, and put
your arguments after the ``--`` separator::

    npm run test-cms-vanilla -- --your --args --here
    npm run test-cms-require -- --your --args --here
    npm run test-cms-webpack -- --your --args --here
    npm run test-lms-webpack -- --your --args --here
    npm run test-xmodule-vanilla -- --your --args --here
    npm run test-xmodule-webpack -- --your --args --here
    npm run test-common-vanilla -- --your --args --here
    npm run test-common-require -- --your --args --here

Code Quality
************

We use several tools to analyze code quality. The full set of them is::

    mypy $PATHS...
    pycodestyle $PATHS...
    pylint $PATHS...
    lint-imports
    scripts/verify-dunder-init.sh
    make xsslint
    make pii_check
    make check_keywords
    npm run lint

Where ``$PATHS...`` is a list of folders and files to analyze, or nothing if
you would like to analyze the entire codebase (which can take a while).
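
For example, to check just one directory rather than the whole codebase (the
path here is only an illustration)::

    mypy lms/djangoapps/courseware
    pycodestyle lms/djangoapps/courseware
    pylint lms/djangoapps/courseware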

