Add CI testing for Rocky Linux with integration test, and distinguish rebench return codes #282

Merged · 4 commits · Jan 26, 2025

57 changes: 57 additions & 0 deletions .github/workflows/ci.yml
@@ -114,3 +114,60 @@ jobs:

      - name: Run Unit Tests
        run: python3 -m pytest

  test-rocky:
    name: "Rocky Linux: Python 3.9"
    runs-on: ubuntu-latest
    container:
      image: rockylinux/rockylinux:9
    steps:
      - name: Checkout ReBench
        uses: actions/checkout@v4

      - name: Install basic tools
        run: dnf install -y which time sudo python3-pip

      - name: Run Tests and Package in venv
        run: |
          python3 -m pip install virtualenv
          python3 -m virtualenv venv
          source venv/bin/activate

          pip install pytest
          pip install .

          pytest
          (cd rebench && rebench -D ../rebench.conf e:TestRunner2)

          python3 setup.py sdist build
          python3 setup.py sdist bdist_wheel

      - name: Install built package globally
        run: pip install dist/*.whl

      - name: Run integration test
        run: |
          set +e
          cd rebench
          rebench -c ../rebench.conf e:TestRunner2
          REBENCH_EXIT=$?

          echo "rebench exit code: $REBENCH_EXIT"

          if [ "$REBENCH_EXIT" -ne "0" ]; then
            echo "rebench failed unexpectedly"
            exit $REBENCH_EXIT
          fi

          if [ ! -f test.data ]; then
            echo "test.data not found"
            exit 1
          fi

          EXPECTED=80
          LINES=$(cat test.data | grep total | wc -l)
          if [ "$LINES" -ne "$EXPECTED" ]; then
            echo "test.data has unexpected number of lines: $LINES"
            echo "expected: $EXPECTED"
            exit 1
          fi
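
For reference, the integration step above can also be reproduced outside the workflow. The following is a rough Python sketch of the same check, not part of this PR; it assumes it is run from the repository root with ReBench installed, and the paths, the `e:TestRunner2` experiment, and the expected count of 80 "total" lines simply mirror the shell step.

```python
# Hedged sketch of the CI integration check, for running locally.
# Assumptions: rebench is on PATH, the working directory is the repository
# root, and rebench.conf plus the rebench/ test directory exist as in CI.
import subprocess
from pathlib import Path

EXPECTED_TOTAL_LINES = 80  # mirrors EXPECTED=80 in the workflow step

# Run the same experiment the workflow runs, without aborting on a non-zero code.
result = subprocess.run(
    ["rebench", "-c", "../rebench.conf", "e:TestRunner2"],
    cwd="rebench", check=False)
print("rebench exit code:", result.returncode)

data_file = Path("rebench") / "test.data"
if not data_file.is_file():
    raise SystemExit("test.data not found")

# Count the result lines reporting a 'total' measurement, as the grep pipeline does.
total_lines = sum("total" in line for line in data_file.read_text().splitlines())
if total_lines != EXPECTED_TOTAL_LINES:
    raise SystemExit(
        f"test.data has {total_lines} 'total' lines, expected {EXPECTED_TOTAL_LINES}")
```
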
16 changes: 12 additions & 4 deletions rebench/rebench.py
@@ -323,24 +323,32 @@ def execute_experiment(self, runs, use_nice, use_shielding):
" using the reported settings.")
return executor.execute()

EXIT_CODE_SUCCESS = 0
EXIT_CODE_BENCHMARK_FAILED = 1
EXIT_CODE_ABORTED = 2
EXIT_CODE_UI_ERROR = 3
EXIT_CODE_EXCEPTION = 4

def main_func():
try:
rebench = ReBench()
return 0 if rebench.run() else -1
if rebench.run():
return EXIT_CODE_SUCCESS
else:
return EXIT_CODE_BENCHMARK_FAILED
except KeyboardInterrupt:
ui = UI()
ui.debug_error_info("Aborted by user request\n")
return -1
return EXIT_CODE_ABORTED
except UIError as err:
ui = UI()
ui.error("\n" + err.message)
return -1
return EXIT_CODE_UI_ERROR
except BenchmarkThreadExceptions as exceptions:
ui = UI()
for ex in exceptions.exceptions:
ui.error(str(ex) + "\n")
return -1
return EXIT_CODE_EXCEPTION


if __name__ == "__main__":
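
Since the point of this change is that callers can now tell the outcomes apart, here is a hedged sketch, not part of the diff, of how a wrapper or CI script might react to the distinct codes. The constants mirror the values introduced in rebench.py, and the command line reuses the test configuration from the workflow above.

```python
# Hedged sketch: map ReBench's now-distinct exit codes to human-readable outcomes.
# The numeric values mirror the constants added in rebench/rebench.py.
import subprocess
import sys

EXIT_CODE_SUCCESS = 0
EXIT_CODE_BENCHMARK_FAILED = 1
EXIT_CODE_ABORTED = 2
EXIT_CODE_UI_ERROR = 3
EXIT_CODE_EXCEPTION = 4

OUTCOMES = {
    EXIT_CODE_SUCCESS: "all benchmarks completed",
    EXIT_CODE_BENCHMARK_FAILED: "at least one benchmark failed",
    EXIT_CODE_ABORTED: "aborted by user request",
    EXIT_CODE_UI_ERROR: "usage or configuration error",
    EXIT_CODE_EXCEPTION: "a benchmark thread raised an exception",
}

result = subprocess.run(["rebench", "-c", "rebench.conf", "e:TestRunner2"], check=False)
print(OUTCOMES.get(result.returncode, f"unknown exit code {result.returncode}"))
sys.exit(result.returncode)
```
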
2 changes: 1 addition & 1 deletion rebench/tests/denoise_test.py
@@ -12,7 +12,7 @@ def test_minimize(self):
         result = minimize_noise(False, self.ui, True)
         self.assertIsInstance(result.succeeded, bool)
         self.assertIsInstance(result.use_nice, bool)
-        self.assertIsInstance(result.use_shielding, bool)
+        self.assertIsInstance(result.use_shielding, (bool, str))
 
         # if it was successful, try to restore normal settings
         if result.succeeded:
7 changes: 5 additions & 2 deletions rebench/tests/environment_test.py
@@ -30,14 +30,17 @@ def test_environment(self):
self.assertGreater(len(env["userName"]), 0)
self.assertGreater(len(env["hostName"]), 0)
self.assertGreater(len(env["osType"]), 0)
self.assertGreater(len(env["cpu"]), 0)


self.assertTrue("manualRun" in env)
self.assertGreater(env["memory"], 0)
self.assertGreaterEqual(env["clockSpeed"], 0)

self.assertGreaterEqual(len(env["software"]), 3)

if "cpu" in env:
self.assertGreater(len(env["cpu"]), 0)
self.assertGreaterEqual(env["clockSpeed"], 0)

def test_extract_base(self):
self.assertEqual('', extract_base(''))
self.assertEqual('branch', extract_base('branch'))
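
The change above makes the CPU-related assertions conditional, presumably because minimal environments such as the Rocky Linux container may not expose a CPU model or clock speed. Below is a minimal, self-contained sketch of that guard pattern; the helper name and the sample dicts are hypothetical, not taken from ReBench.

```python
# Hedged sketch of the guard pattern: hardware details such as the CPU model or
# clock speed may be unavailable (e.g. inside a container), so they are only
# asserted when the environment dict actually provides them.
def check_environment(env):
    # Keys that are expected to always be populated.
    assert len(env["osType"]) > 0
    # Hardware-dependent keys are optional; skip the checks when absent.
    if "cpu" in env:
        assert len(env["cpu"]) > 0
        assert env["clockSpeed"] >= 0


check_environment({"osType": "Linux", "cpu": "Example CPU @ 2.4GHz", "clockSpeed": 2400})
check_environment({"osType": "Linux"})  # e.g. a container without detectable CPU details
```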