
Rewrite integration test in pytest #606

Merged — 1 commit merged into main on Mar 21, 2024
Conversation

martinpitt
Member

@martinpitt martinpitt commented Mar 18, 2024

Move it from tasks/ into test/ as it involves more than the tasks container, and could e.g. also grow integration tests for metrics.

Also set up ruff and mypy static code checks, and add a pyproject.toml configuration for these.


@martinpitt martinpitt force-pushed the pytest branch 9 times, most recently from 027dd89 to 8ee38d0 Compare March 18, 2024 13:50
@martinpitt martinpitt marked this pull request as ready for review March 18, 2024 13:51
@martinpitt
Member Author

@allisonkarlitskaya This is ready for review now -- there are still a few hacks for pytest_container issues, but upstream is very nice and responsive, so over time we can hopefully clean them up.

@allisonkarlitskaya allisonkarlitskaya (Member) left a comment

One very quick comment: please add a separate commit to include all of the test dependencies in the tasks container.

@allisonkarlitskaya
Member

When running this in a toolbox I get this pretty quickly:

ERROR test/test_deployment.py::test_podman - ValueError: Selected runtime podman does not exist on the system

Even though I have a (script wrapper) podman in my path.

This would be a fairly serious regression...

@martinpitt
Member Author

martinpitt commented Mar 18, 2024

One very quick comment: please add a separate commit to include all of the test dependencies in the tasks container.

I can split it up for the previously existing pyflakes/pycodestyle, but the pytest_container, ruff, and mypy stuff is definitely new and should be in the same commit.

ValueError: Selected runtime podman does not exist on the system

Ah right! That's another quirk of pytest_container which I should report -- sudo dnf install buildah will fix that (but it shouldn't need that as we don't build anything). I reported that here: dcermak/pytest_container#191

@martinpitt
Member Author

@allisonkarlitskaya I fixed the test to be parallel friendly. I can now run two pytest instances simultaneously.

@martinpitt martinpitt force-pushed the pytest branch 3 times, most recently from bef8a31 to d14efe9 Compare March 20, 2024 04:54
@martinpitt
Member Author

@allisonkarlitskaya I added a separate FIXUP to use pytest_container from main. That makes things much nicer. If you object to that, I can drop it, otherwise squash.

@martinpitt
Member Author

I rewrote this without pytest_container at last. It avoids dependencies, actually makes things simpler (as we don't use the advanced capabilities of pytest_container like building containers on the fly), avoids a few hacks especially around PULL_ALWAYS, and even reduces the PR by 40 lines.

Initially pushed as separate fixup in commit 2593b1e , just for archival purposes. I want to land this version, so I'll squash it now and adjust the documentation.

@martinpitt martinpitt force-pushed the pytest branch 3 times, most recently from 1c5a389 to 4148c63 Compare March 20, 2024 08:43
@martinpitt
Member Author

@allisonkarlitskaya I fixed parallel runs -- restoring the podman socket permissions on teardown isn't compatible with parallel tests.

@martinpitt
Member Author

I installed python3-pytest-xdist, and `pytest -n auto` works now, with some timeout adjustments.

@martinpitt
Member Author

One very quick comment: please add a separate commit to include all of the test dependencies in the tasks container.

Obsolete now -- all of them (pytest, ruff, mypy) are already in tasks.

@allisonkarlitskaya allisonkarlitskaya (Member) left a comment

This looks awesome, both compared to the shell version, and the iterations that came before.

For me, the most uncomfortable part about your usage of pytest is how you have a bunch of fixtures which cause side-effects in their environment, but don't return anything. You then use those fixtures from your test functions (while ignoring their values).

I've given a few comments about ways in which I'd change that.

sudo apt-get install -y make python3-pyflakes python3-pycodestyle
sudo apt-get install -y make python3-pyflakes python3-pycodestyle python3-pip python3-pytest
# `pip install .[test]` does not work properly on Ubuntu 22.04
sudo pip install ruff mypy types-PyYAML
Member

I'd almost have preferred an ignore on import yaml, but this doesn't hurt anybody, I guess...

The use of yaml in the test is so trivial. I wonder if we could just avoid it entirely. I guess you want to make a point of testing the deployment, to the extent possible, though...

Member Author

Right, it's the same code as the actual deployment, and dissecting a k8s resource isn't that trivial with just string functions. I'm ok with ignoring yaml for mypy, but it seemed nice to cover, as that types lib exists.
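To illustrate the point, dissecting a k8s Deployment with PyYAML might look roughly like this; the manifest, image name, and helper are made up for the sketch, not the PR's actual code:

```python
# Hypothetical sketch: pulling container images out of a k8s Deployment
# manifest.  Doing this with plain string functions would be fragile,
# which is why the test keeps the yaml dependency.
import yaml

MANIFEST = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tasks
spec:
  template:
    spec:
      containers:
        - name: tasks
          image: ghcr.io/example/tasks:latest
"""


def container_images(manifest: str) -> list[str]:
    """Return the image of every container in a Deployment manifest."""
    deployment = yaml.safe_load(manifest)
    containers = deployment['spec']['template']['spec']['containers']
    return [c['image'] for c in containers]
```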

Makefile Outdated
@@ -6,6 +6,8 @@ all:
check:
python3 -m pyflakes tasks tasks/container/webhook
python3 -m pycodestyle --max-line-length=120 --ignore=E722 tasks tasks/container/webhook
if command -v mypy >/dev/null && pip show types-PyYAML >/dev/null; then mypy test; else echo "SKIP: mypy or types-PyYAML not installed"; fi
Member

This annoys me more. I don't think we should skip mypy because we don't have the yaml types installed...

Member Author

Yeah, that's something I'd like to do as a follow-up, or prerequisite. It involves a tasks container refresh to add it there, as the integration test runs itself inside of the tasks container, and would fail without it. I sent #609, if/when that lands I can drop the pip show part.

Member Author

OK, # type: ignore[import-untyped] DTRT, and still typechecks if the types are actually installed. So I went with that.


This will run tests-scan/tests-trigger on the given PR and trigger an
[unit-tests](../.cockpit-ci/run) test which simply does `make check`.

You can get an interactive shell with

tasks/run-local.sh -i
pytest -sm shell
Member

this is seriously l33t.

Member

The `addopts = "-m 'not shell'"` thing is a bit less l33t. I guess there's no better way to do that...

If we're going to do that anyway, why do we need the marker? The name of the test would be enough on its own and then you could use -k shell. I don't mind either way, though.

Member

Another option: you add a bunch of commandline arguments via conftest. You could do --shell mode there.

Member Author

@pytest.mark is the only way I found to properly decorate a test to skip itself. You don't have access to the pytestconfig fixture (which is a runtime concept) in a decorator. I suppose I could convert it from a decorator into a procedural call, but then I'd have to add that check to all other tests (to skip themselves in --shell mode), which I find super ugly.

So in essence, I need an elegant way to say "discover/run these tests by default", but with some special invocation only run test_shell; -k only gives me half of that.

If you have a better idea, I'm all ears! I don't like the "not shell" hack either, but I don't know a better way.

Member

You can implement commandline argument parsing via plugins and you can modify test collection via plugins. I feel like that's all you need?

Member Author

Ah, thanks for the hint! That goes to the pilot board as improvement then. I'll probably look at it in the next days, I want to learn this.
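A conftest.py along the lines of the plugin-hook suggestion might look like this sketch; the option name and the `test_shell` selection rule are assumptions for illustration, not the repository's actual code:

```python
# conftest.py sketch (hypothetical): a --shell command-line option added
# via pytest plugin hooks.  With --shell, only test_shell is collected;
# without it, test_shell is deselected and everything else runs, which
# would replace the addopts = "-m 'not shell'" marker hack.

def pytest_addoption(parser):
    parser.addoption('--shell', action='store_true',
                     help='run only the interactive shell "test"')


def pytest_collection_modifyitems(config, items):
    want_shell = config.getoption('--shell')
    selected = [i for i in items if (i.name == 'test_shell') == want_shell]
    deselected = [i for i in items if (i.name == 'test_shell') != want_shell]
    if deselected:
        # tell pytest (and its reporting) which tests were dropped
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```

With this in place, `pytest` runs the normal suite and `pytest --shell` drops into the interactive shell test, with no marker needed.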


# we want to have useful pod/container names for interactive debugging and log dumping, but still allow
# parallel tests (with e.g. xdist), so disambiguate them with the pid
test_instance = str(os.getpid())
Member

I love this new approach. Thanks for that.
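Spelled out, the pid-based disambiguation quoted above could be used like this; the `instance_name` helper and the base names are illustrative, not the test's actual identifiers:

```python
# Sketch: pid-disambiguated pod/container names, so names stay readable
# for interactive debugging and log dumping while still being unique
# across parallel runs (e.g. with pytest-xdist).
import os

test_instance = str(os.getpid())


def instance_name(base: str) -> str:
    """Unique-per-process name, e.g. 'tasks-12345' (hypothetical helper)."""
    return f'{base}-{test_instance}'
```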



@pytest.fixture(scope='session')
def pod(config: Config, pytestconfig) -> Generator[PodData, None, None]:
Member

Better written as Iterator[PodData] imho (or was it Iterable?), but it's a matter of taste.

Member Author

Oh, that's nicer indeed! I was following mypy's recommendation without thinking about it too much. Changed to Iterator.
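The shape of the annotation change, roughly; `PodData` fields and the setup/teardown are stand-ins, and in the real conftest the function carries `@pytest.fixture(scope='session')`, omitted here so the sketch runs standalone:

```python
# Sketch: a yielding fixture annotated as Iterator[PodData] instead of
# the noisier Generator[PodData, None, None] — equivalent for fixtures,
# since the send/return type parameters are unused.
import os
from collections.abc import Iterator
from dataclasses import dataclass


@dataclass
class PodData:
    pod: str
    tasks: str


def pod() -> Iterator[PodData]:
    data = PodData(pod=f'pod-{os.getpid()}', tasks=f'tasks-{os.getpid()}')
    yield data  # tests receive the PodData value
    # teardown (e.g. removing the pod) would run here after the session
```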


yield f'export GITHUB_API={GHAPI_URL_POD}; SHA={bots_sha}'

exec_c_out(pod.tasks, f'while kill {mock_pid}; do sleep 0.1; done')
Member

Why can't we just create a subprocess object and use the normal .kill() function on it when we're done?

Member Author

We can, but then it doesn't fit nicely into the exec_c{,_out}() helpers any more, with subprocess.run(), plus it makes the whole podman exec stick around. But I rewrote it using Popen() now, and it does look quite a bit nicer.
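The Popen-based teardown might look roughly like this; the command is a placeholder for the real `podman exec` of the mock GitHub API, and the fixture name is invented:

```python
# Hedged sketch of the Popen rewrite: start the helper process with
# subprocess.Popen, yield it to the test, and kill it on teardown,
# replacing the `while kill $PID; do sleep 0.1; done` shell loop.
import subprocess
from collections.abc import Iterator


def mock_server() -> Iterator[subprocess.Popen]:
    proc = subprocess.Popen(['sleep', '3600'])  # placeholder long-running command
    try:
        yield proc
    finally:
        proc.kill()  # replaces the shell kill loop
        proc.wait()  # reap the child so it does not linger as a zombie
```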

def mock_git_push(pod: PodData) -> Generator[None, None, None]:
"""Install tasks/mock-git-push"""

subprocess.run(['podman', 'cp', str(ROOT_DIR / 'tasks/mock-git-push'), f'{pod.tasks}:/usr/local/bin/git'],
Member

I wonder if we can just include this in the pod setup? Is there ever a time we want to push "for real"?

The interaction of one fixture implicitly modifying the state of another fixture (and then returning None for itself) seems a bit magic to me.

Member Author

Right, moved to the setup.

generate_config(config,
forge_opts=f'api-url = "{GHAPI_URL_POD}"',
run_args=f'"--pod={pod.pod}", "--env=GITHUB_API={GHAPI_URL_POD}"')
return None # silence ruff PT004
Member

I'd much rather these returned a path object which we could pass to job-runner -F...

Member Author

But then we'd diverge from what happens in the actual deployment, where we don't use -F, and instead point to it via env variable. I can return the path of course, but it's not actually going to use it. It looks a bit nicer, though.

martinpitt added a commit that referenced this pull request Mar 21, 2024
This is a prerequisite for proper mypy testing of the upcoming pytest in
PR #606
Move it from tasks/ into test/ as it involves more than the tasks
container, and could e.g. also grow integration tests for metrics.

Also set up ruff and mypy static code checks, and add a pyproject.toml
configuration for these.
@martinpitt
Member Author

@allisonkarlitskaya I addressed most of your comments, I just don't have a better idea about the -sm shell, see the discussion above.

@allisonkarlitskaya allisonkarlitskaya (Member) left a comment

I'm happy enough with this now. We can address remaining issues/tweaks as follow-ups :)

This is a very nice improvement. I'm glad you were inspired to work on it ✨

martinpitt added a commit that referenced this pull request Mar 21, 2024
This is a prerequisite for proper mypy testing of the upcoming pytest in
PR #606
@martinpitt
Member Author

Ack! Let's see how this works in practice. I'll work on a corresponding bots change to use the new test there also, otherwise we break PRs. I added the "shell hack" thing to the pilot board.

@martinpitt martinpitt merged commit 32b20aa into main Mar 21, 2024
3 checks passed
@martinpitt martinpitt deleted the pytest branch March 21, 2024 16:13
martinpitt added a commit to cockpit-project/bots that referenced this pull request Mar 21, 2024
allisonkarlitskaya pushed a commit to cockpit-project/bots that referenced this pull request Mar 21, 2024