
feat: add integration tests #331

Merged
MaferMazu merged 10 commits into mitodl:master from mfmz/integration-tests-by-release on Jan 23, 2025

Conversation

MaferMazu
Collaborator

@MaferMazu MaferMazu commented Dec 26, 2024

Description (What does it do?)

This PR adds a workflow that validates the library's functionality against the Open edX named releases.

Based on the library documentation:

The mitxgraders Python library provides a number of configurable Python classes that can be used as graders in edX custom response problems.

So, this PR launches a Tutor env with codejail and runs a script containing the Python classes used by this library.

This PR only tests that you can use the principal classes, but it doesn't test the correctness of each class; for that, we have the unit tests.
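
For context, the kind of smoke-level check this exercises looks roughly like the following. This is a minimal sketch: StringGrader and its answers option come from the library docs, and the (None, 'cat') call follows the check-function convention edX uses for custom response problems.

from mitxgraders import StringGrader

# Instantiate a grader the way a course author would in a
# customresponse problem, then grade one sample submission.
grader = StringGrader(answers='cat')
result = grader(None, 'cat')  # edX calls the grader with (expect, student_input)
assert result['ok']  # the submission matched the expected answer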

Important

Adding this workflow increases CI execution time: the unit tests run in seconds, while this integration test takes ~9 minutes.

Note

This test shows that the python_lib.zip file works as expected in a Tutor environment. If python_lib.zip doesn't change in a PR, the changes in that PR won't be tested.

Detailed changes:

  • Added a GitHub action that launches a Tutor env with codejail and imports the MIT course. (The MIT course uses the python_lib.zip file from the branch of a PR)

How can this be tested?

@MaferMazu MaferMazu force-pushed the mfmz/integration-tests-by-release branch from 0399594 to dac1871 on December 26, 2024 22:25
@MaferMazu MaferMazu marked this pull request as ready for review December 27, 2024 22:24
@MaferMazu
Collaborator Author

If the execution time is not a problem, we can keep this as it is; if it is, we can run the integration test only when the PR has a specific label. What do you think, @blarghmatey?

.github/workflows/integration-test.yaml
integration_tests/integration_test.py
log = logging.getLogger(__name__)

# Define the code to be executed
all_code = """

Suggested change:
- all_code = """
+ GRADING_CLASSES_CODE = """

What would be the easiest way to maintain this?

Collaborator Author

We can use something like the following:

from mitxgraders import *

StringGrader()
FormulaGrader()
NumericalGrader()
MatrixGrader()
# ...

or

from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
    MatrixGrader
)

That would test that we can use the grader classes we want.

I added more context to the code to cover a little more in the tests, using some examples from the unit tests. However, that may not be necessary, since those cases are already covered by the unit tests; we could test only that the graders can be used. I'll leave this up to @blarghmatey.

@blarghmatey, do you prefer code with some examples (more coverage, but more maintenance), or only a test that we can use the graders, which is the main functionality, with less maintenance, leaving the rest to the unit tests?

Thanks, @mariajgrimaldi, for that observation.

Collaborator Author

I have been thinking about this, and the best approach is to test with the import and leave the details for the unit tests.
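
A minimal sketch of that import-only approach (the grader class names are from the library docs; the rest is illustrative):

# Hypothetical import-only smoke test: if these imports succeed inside
# the Tutor/codejail environment, the packaging and dependency chain of
# python_lib.zip are working; correctness stays with the unit tests.
from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
    MatrixGrader,
)

print("mitxgraders imports OK")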

Collaborator

I'd suggest running just a MatrixGrader. That's the class that uses the most external dependencies (scipy, numpy). I agree that so long as those are working, the unit tests can handle the rest.
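
A minimal sketch of what that single MatrixGrader check might look like (the matrix-entry answer syntax follows the MatrixGrader examples in the library docs; treat the exact configuration as an assumption):

from mitxgraders import MatrixGrader

# MatrixGrader is the grader that pulls in numpy/scipy, so one
# instantiation plus one graded submission exercises those external
# dependencies end to end.
grader = MatrixGrader(answers='[[1, 2], [3, 4]]')
result = grader(None, '[[1, 2], [3, 4]]')
assert result['ok']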

Member

I think that makes sense. The main purpose of the integration test is to make sure we can catch any regressions due to underlying dependency changes.

Collaborator Author
@MaferMazu MaferMazu Jan 2, 2025

@blarghmatey, for dependency changes, the unit tests we run on Python 3.8 and 3.11 should be enough, because those tests use the edx-sandbox requirements that codejail uses for Redwood and Sumac, respectively.

This integration test runs a full environment to check that we can use the library as expected in a production-like setup.

@MaferMazu MaferMazu force-pushed the mfmz/integration-tests-by-release branch from 3a681bd to 4a0f1ab on December 31, 2024 00:42
@MaferMazu MaferMazu force-pushed the mfmz/integration-tests-by-release branch from 4a0f1ab to 5dd6748 on December 31, 2024 01:15
@blarghmatey
Member

In terms of execution time, it's not a major problem if it's slow. That being said, we can also just run it on a ~monthly cadence rather than for every pull request.

@MaferMazu MaferMazu force-pushed the mfmz/integration-tests-by-release branch from 72549e2 to df52cce on January 2, 2025 17:52
@MaferMazu
Collaborator Author

@blarghmatey @jolyonb @mariajgrimaldi this is ready for a re-review. ✨

@mariajgrimaldi mariajgrimaldi left a comment

Thank you for addressing all my comments!

@MaferMazu
Collaborator Author

@blarghmatey @jolyonb @pdpinch, I just wanted to follow up on this.

Can you review this PR?

@MaferMazu
Collaborator Author

Thanks for your feedback. I'll merge this, considering that the comments were addressed and we received feedback in Slack that everything looks good.

@blarghmatey @jolyonb @pdpinch If there is something left, let me know.

Note: this PR doesn't need a new release because it only creates a GitHub workflow.

@MaferMazu MaferMazu merged commit 9fa5d02 into mitodl:master Jan 23, 2025
4 checks passed
@MaferMazu MaferMazu mentioned this pull request Jan 30, 2025