
feat: add integration tests #331

Merged
10 commits merged on Jan 23, 2025
52 changes: 52 additions & 0 deletions .github/workflows/integration-test.yaml
@@ -0,0 +1,52 @@
name: Integration Test

on: [pull_request]

jobs:
edx-platform-integration-test:
name: Integration with Tutor
strategy:
matrix:
# Open edX Version: Sumac
tutor_version: ["<20.0.0"]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
path: mitx-grading-library

- name: Adjust permissions to execute Tutor commands
run: |
chmod 777 . -R
shell: bash

- name: Set Tutor environment variables
run: |
cat <<EOF >> "$GITHUB_ENV"
LMS_HOST=local.edly.io
CMS_HOST=studio.local.edly.io
TUTOR_ROOT=$(pwd)
EOF
shell: bash

- name: Install and prepare Tutor, Codejail and tests
run: |
pip install "tutor${{ matrix.tutor_version }}"
pip install git+https://github.com/edunext/tutor-contrib-codejail
tutor config save
tutor plugins enable codejail
tutor local do init --limit codejail
tutor mounts add cms:mitx-grading-library/integration_tests/integration_test.py:/openedx/edx-platform/integration_test.py
tutor local launch -I
shell: bash

- name: Import MITx Demo Course
run: |
tutor local do importdemocourse -r ${{ github.event.pull_request.head.repo.clone_url }} -d course -v ${{ github.event.pull_request.head.ref }}
shell: bash

- name: Run integration tests
run: |
tutor local run cms python3 integration_test.py
shell: bash
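The "Set Tutor environment variables" step relies on `$GITHUB_ENV`: every `KEY=VALUE` line appended to that file is exported as an environment variable to all subsequent steps of the job. A minimal Python sketch of that semantics (the `TUTOR_ROOT` value below is a placeholder, not the actual runner path):

```python
def parse_github_env(text):
    """Parse KEY=VALUE lines the way GitHub Actions reads $GITHUB_ENV."""
    env = {}
    for line in text.strip().splitlines():
        # Only the first '=' separates the key from the value.
        key, _, value = line.partition("=")
        env[key] = value
    return env

# Mirrors the heredoc in the workflow step; TUTOR_ROOT is a placeholder here.
sample = (
    "LMS_HOST=local.edly.io\n"
    "CMS_HOST=studio.local.edly.io\n"
    "TUTOR_ROOT=/path/to/checkout\n"
)
env = parse_github_env(sample)
```

In the real workflow, `TUTOR_ROOT=$(pwd)` resolves to the job's working directory, so Tutor writes its project files next to the checked-out repository.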
131 changes: 131 additions & 0 deletions integration_tests/integration_test.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,131 @@
"""
This script is used to test the integration of the mitx-graders library with the Open edX platform.
Running a code that uses the functions provided by the library using the safe_exec function, and in the MIT course context,
to be able to use the python_lib.zip that contains the library.
"""

import logging
import os

import django
from xmodule.capa.safe_exec import safe_exec
from xmodule.util.sandboxing import SandboxService
from xmodule.contentstore.django import contentstore
from opaque_keys.edx.keys import CourseKey

# Set up Django environment
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cms.envs.test")
django.setup()

log = logging.getLogger(__name__)

# Define the code to be executed
all_code = """

Suggested change:
- all_code = """
+ GRADING_CLASSES_CODE = """


What would be the easiest way to maintain this?

Collaborator Author
We can use something like the following:

from mitxgraders import *

StringGrader()
FormulaGrader()
NumericalGrader()
MatrixGrader()
...

or

from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
    MatrixGrader
)

That could test that we can use the Grader functions we want.

I added more context to the code to cover a bit more in this test, reusing some examples from the unit tests. However, that may not be necessary, since those cases are already covered by the unit tests. We could test only that the graders are usable. I think this is entirely up to @blarghmatey.

@blarghmatey, do you prefer a code with some examples but with more maintenance or only a test that we can use the graders, which is the main functionality, with less maintenance, and leave the rest to the unit test?

Thanks, @mariajgrimaldi, for that observation.

Collaborator Author

I have been thinking about this, and the best approach is to test with the import and leave the details for the unit tests.
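A minimal sketch of what that import-only payload could look like (the payload string is hypothetical; the grader names come from the examples in this thread). Validating the payload with the stdlib `ast` module before shipping it to the sandbox catches syntax errors early:

```python
import ast

# Hypothetical payload for the import-only approach: just prove that the
# library (and, transitively, its dependencies) imports inside the sandbox.
IMPORT_ONLY_CODE = """
from mitxgraders import (
    StringGrader,
    FormulaGrader,
    NumericalGrader,
    MatrixGrader,
)
"""

# Sanity-check that the payload is valid Python before handing it to
# safe_exec; the import itself can only succeed inside the sandbox where
# the library is installed.
ast.parse(IMPORT_ONLY_CODE)
```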

Collaborator

I'd suggest running just a MatrixGrader. That's the class that uses the most external dependencies (scipy, numpy). I agree that so long as those are working, the unit tests can handle the rest.
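One way to express that idea as a pre-flight check is to confirm the sandbox can import the heavy dependencies at all before exercising the grader. This helper is an illustrative sketch, not part of the PR:

```python
import importlib

def missing_modules(names):
    """Return the given module names that fail to import in this environment."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Inside the sandbox one would check something like:
# assert not missing_modules(["numpy", "scipy", "mitxgraders"])
```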

Member

I think that makes sense. The main purpose for the integration test is to make sure that we can be aware of any regressions due to underlying dependency changes.

Collaborator Author (@MaferMazu, Jan 2, 2025)

@blarghmatey, for dependency changes, the unit tests we run on Python 3.8 and 3.11 should be enough, because they use the edx-sandbox requirements that codejail uses for Redwood and Sumac, respectively.

This integration test spins up a full environment to check that the library can be used as expected in a production-like deployment.

from mitxgraders import *

# Grading Classes

## Single-input graders

StringGrader(answers="cat")
FormulaGrader(answers='0')
NumericalGrader()
MatrixGrader(
    answers='x*A*B*u + z*C^3*v/(u*C*v)',
    variables=['A', 'B', 'C', 'u', 'v', 'z', 'x'],
    sample_from={
        'A': RealMatrices(shape=[2, 3]),
        'B': RealMatrices(shape=[3, 2]),
        'C': RealMatrices(shape=[2, 2]),
        'u': RealVectors(shape=[2]),
        'v': RealVectors(shape=[2]),
        'z': ComplexRectangle()
    },
    identity_dim=2
)
SingleListGrader(
    answers=['cat', 'dog', 'unicorn'],
    subgrader=StringGrader()
)

## Multi-input graders

ListGrader(
    answers=['cat', 'dog', 'unicorn'],
    subgraders=StringGrader()
)

## Specialized graders

IntegralGrader(
    answers={
        'lower': 'a',
        'upper': 'b',
        'integrand': 'x^2',
        'integration_variable': 'x'
    },
    input_positions={
        'integrand': 1,
        'lower': 2,
        'upper': 3
    },
    variables=['a', 'b']
)

IntervalGrader(answers=['(', '1', '2', ']'])

SumGrader(
    answers={
        'lower': 'a',
        'upper': 'b',
        'summand': 'x^2',
        'summation_variable': 'x'
    },
    input_positions={
        'summand': 1,
        'lower': 2,
        'upper': 3
    },
    variables=['a', 'b']
)

"""


def execute_code(all_code, course_key_str):
"""
Executes the provided code in a sandboxed environment with the specified course context.

Args:
all_code (str): The code to be executed.
course_key_str (str): The string representation of the course key.

Returns:
None
"""
course_key = CourseKey.from_string(course_key_str)
sandbox_service = SandboxService(
course_id=course_key,
contentstore=contentstore)
zip_lib = sandbox_service.get_python_lib_zip()

extra_files = []
python_path = []

if zip_lib is not None:
extra_files.append(("python_lib.zip", zip_lib))
python_path.append("python_lib.zip")

safe_exec(
all_code,
globals_dict={},
python_path=python_path,
extra_files=extra_files,
slug="integration-test",
limit_overrides_context=course_key_str,
unsafely=False,
)


if __name__ == "__main__":
course_key_str = "course-v1:MITx+grading-library+course"
execute_code(all_code, course_key_str)
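For local experimentation without a full CMS, the `python_lib.zip` that `SandboxService.get_python_lib_zip()` returns can be approximated in memory with the stdlib; the stub module name and content below are illustrative:

```python
import io
import zipfile

def build_python_lib_zip(files):
    """Build an in-memory zip shaped like a course's python_lib.zip."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()

# Illustrative stub standing in for the packaged grading library.
zip_bytes = build_python_lib_zip({"mitxgraders/__init__.py": "VERSION = 'stub'\n"})
```

The resulting bytes play the role of `zip_lib` above: passed to `safe_exec` as the `python_lib.zip` extra file and put on `python_path`, so sandboxed code can import modules from it.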