feat: add integration tests #331
Conversation
If the execution time is not a problem, we can keep this as it is; if not, we can run the integration test only when the PR has a specific label. What do you think, @blarghmatey?
```python
log = logging.getLogger(__name__)

# Define the code to be executed
all_code = """
```
all_code = """ | |
GRADING_CLASSES_CODE = """ |
What would be the easiest way to maintain this?
We can use something like the following:

```python
from mitxgraders import *

StringGrader()
FormulaGrader()
NumericalGrader()
MatrixGrader()
...
```

or:

```python
from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
    MatrixGrader,
)
```
That could test that we can use the Grader functions we want.
I added more context to the code to cover a little more in the tests, using some examples from the unit tests. However, this may not be necessary, since that behavior is already covered by the unit tests; we could test only that the graders can be used. I think this is totally up to @blarghmatey.
@blarghmatey, do you prefer code with some examples, which needs more maintenance, or only a test that we can use the graders (the main functionality), which needs less maintenance and leaves the rest to the unit tests?
Thanks, @mariajgrimaldi, for that observation.
I have been thinking about this, and I believe the best approach is to test via the imports and leave the details to the unit tests.
I'd suggest running just a MatrixGrader. That's the class that uses the most external dependencies (scipy, numpy). I agree that so long as those are working, the unit tests can handle the rest.
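For illustration, a minimal check along those lines might look like the sketch below. This is not code from this PR; the configuration values and the use of the `RealMatrices` sampling class are assumptions based on the mitxgraders documentation:

```python
from mitxgraders import MatrixGrader, RealMatrices

# Illustrative MatrixGrader smoke test (config values are placeholders,
# not taken from this PR). Constructing and calling the grader exercises
# the numpy/scipy-backed sampling and comparison paths.
grader = MatrixGrader(
    answers='A*B',
    variables=['A', 'B'],
    sample_from={
        'A': RealMatrices(shape=[2, 2]),
        'B': RealMatrices(shape=[2, 2]),
    },
)

# Graders are callable with (expected_answer, student_input); passing
# None uses the configured answers.
result = grader(None, 'A*B')
assert result['ok']
```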
I think that makes sense. The main purpose for the integration test is to make sure that we can be aware of any regressions due to underlying dependency changes.
@blarghmatey, for dependency changes, the unit tests we run on Python 3.8 and 3.11 should be enough, because those tests use the edx-sandbox requirements that codejail uses for Redwood and Sumac, respectively.
We run an environment for this integration test to check that the library can be used as expected in a production environment.
In terms of execution time, it's not a major problem if it's slow. That being said, we can also just run it on a ~monthly cadence rather than for every pull request.
@blarghmatey @jolyonb @mariajgrimaldi this is ready for a re-review. ✨
Thank you for addressing all my comments!
@blarghmatey @jolyonb @pdpinch, I just wanted to follow up on this. Can you review this PR?
Thanks for your feedback. I'll merge this, considering that the comments were addressed and we received feedback in Slack that everything looks good. @blarghmatey @jolyonb @pdpinch, if there is anything left, let me know. Note: this PR doesn't need a new release because it only adds a GitHub workflow.
Description (What does it do?)
This PR adds a workflow that validates the library's functionality against Open edX named releases.
Based on the library documentation:
So, this PR launches a Tutor env with codejail and runs a script containing the Python classes used by this library.
This PR only tests that you can use the principal classes, but it doesn't test the correctness of each class; for that, we have the unit tests.
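As a rough illustration of the kind of script described above (a sketch, not the actual script in this PR; the answer values are placeholders), the integration test could simply instantiate the principal classes and let any import or construction error fail the run:

```python
# Hypothetical smoke test: import the principal grader classes and
# instantiate each one, leaving grading correctness to the unit tests.
from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
)

StringGrader(answers='cat')
FormulaGrader(answers='x^2', variables=['x'])
NumericalGrader(answers='2')
# MatrixGrader needs a matrix sampling config; see the MatrixGrader
# sketch in the conversation above.
```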
> [!IMPORTANT]
> Adding this workflow increases the execution time.
> The unit test runs in seconds, and this integration test runs in ~9 minutes.
> [!NOTE]
> This test shows that the `python_lin.zip` file works as expected in a Tutor environment. If the `python_lin.zip` doesn't change in a PR, the changes in that PR won't be tested.

Detailed changes:
How can this be tested?