Commit: Initial README (#22)

* Initial README

* program -> solution

* Update README.md

* Update README.md

Co-authored-by: Tomasz Nowak <[email protected]>

* adjustments and installation steps for non-linuxes

* image size adjustment

* Fix merge errors

---------

Co-authored-by: Mateusz Masiarz <[email protected]>
tonowak and MasloMaslane authored Jun 17, 2023
1 parent 6781e97 · commit 4dccc10
Showing 4 changed files with 87 additions and 32 deletions.
README.md: 69 changes (60 additions, 9 deletions)
@@ -1,11 +1,62 @@
# sinol-make
CLI tool for creating sio2 task packages. \
Currently in development and not yet ready to be used.
# <img src="https://avatars.githubusercontent.com/u/2264918?s=200&v=4" height=60em> sinol-make

## Installing from source
`pip3 install .`
`sinol-make` is a CLI tool for creating and verifying problem packages
for [sio2](https://github.com/sio2project/oioioi)
with features such as:
- measuring time and memory in the same deterministic way as sio2,
- running the solutions in parallel,
- keeping a git-friendly report of solutions' scores,
- catching mistakes in the problem packages as early as possible,
- and more.

# Contents

- [Why?](#why)
- [Installation](#installation)
- [Usage](#usage)
- [Configuration](#configuration)
- [Reporting bugs and contributing code](#reporting-bugs-and-contributing-code)

### Why?

The purpose of the tool is to make it easier to create good problem packages
for official competitions, which requires collaborating with other people
and following a multitude of "good practice" recommendations.
While there are several excellent CLI tools for creating tests and solutions,
they lack some built-in mechanisms for verifying packages and finding mistakes
before uploading the package to the judge system.
As `sinol-make` was created specifically for sio2 problem packages,
by default it downloads and uses sio2's deterministic mechanism
for measuring solutions' runtime, called `oiejq`.

### Installation

You can install [sinol-make](https://pypi.org/project/sinol-make/) directly
through Python's package manager `pip`, which is usually installed alongside Python:

```
pip3 install sinol-make
```
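
To check that the installation succeeded, you can print the tool's built-in help:

```
sinol-make --help
```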

As `oiejq` works only on Linux-based operating systems,
*we do not recommend* using operating systems such as Windows or macOS.
Nevertheless, `sinol-make` supports those operating systems,
though additional installation steps are required to use
other time-measuring tools (which are non-deterministic and produce reports that differ from sio2's):
- Windows (WSL): `apt install time timeout`
- macOS: `brew install gnu-time coreutils`

### Usage

The available commands (see `sinol-make --help`) are:

- `sinol-make run` -- Runs selected solutions (by default all solutions) on selected tests (by default all tests) with a given number
of CPUs. Measures the solutions' time with `oiejq`, unless specified otherwise. After running the solutions, it
compares the solutions' scores with the ones saved in `config.yml`.
Run `sinol-make run --help` to see all available flags; a sample invocation is shown below.
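
A minimal sketch of a typical invocation, assuming a package whose solutions live in `prog/` and whose tests live in `in/` (the file names are illustrative; the flags are the ones defined by the `run` command):

```
sinol-make run --solutions prog/abc.cpp prog/abc1.cpp --tests in/abc1* --cpus 4 --time_tool time
```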

### Reporting bugs and contributing code

- Want to report a bug or request a feature? [Open an issue](https://github.com/sio2project/sinol-make/issues).
- Want to help us build `sinol-make`? Create a Pull Request and we will gladly review it.

## Running tests
1. Install `sinol-make` with test dependencies: \
```pip3 install .[tests]```
2. Run `pytest` in the root directory of this repository.
src/sinol_make/commands/run/__init__.py: 24 changes (14 additions, 10 deletions)
@@ -23,14 +23,18 @@ def get_name(self):
def configure_subparser(self, subparser):
parser = subparser.add_parser(
'run',
help='Run current task',
description='Run current task'
help='Runs solutions in parallel on tests and verifies the expected solutions\' scores with the config.',
description='Runs selected solutions (by default all solutions) \
on selected tests (by default all tests) \
with a given number of cpus. \
Measures the solutions\' time with oiejq, unless specified otherwise. \
After running the solutions, it compares the solutions\' scores with the ones saved in config.yml.'
)

default_timetool = 'oiejq' if sys.platform == 'linux' else 'time'

parser.add_argument('--programs', type=str, nargs='+',
help='programs to be run, for example prog/abc{b,s}*.{cpp,py}')
parser.add_argument('--solutions', type=str, nargs='+',
help='solutions to be run, for example prog/abc{b,s}*.{cpp,py}')
parser.add_argument('--tests', type=str, nargs='+',
help='tests to be run, for example in/abc{0,1}*')
parser.add_argument('--cpus', type=int,
@@ -39,8 +43,8 @@ def configure_subparser(self, subparser):
parser.add_argument('--ml', type=float, help='memory limit (in MB)')
parser.add_argument('--hide_memory', dest='hide_memory', action='store_true',
help='hide memory usage in report')
parser.add_argument('--program_report', type=str,
help='file to store report from program executions (in markdown)')
parser.add_argument('--solutions_report', type=str,
help='file to store report from solution executions (in markdown)')
parser.add_argument('--time_tool', choices=['oiejq', 'time'], default=default_timetool,
help='tool to measure time and memory usage (default when possible: oiejq)')
parser.add_argument('--oiejq_path', type=str,
@@ -480,7 +484,7 @@ def compile_and_run(self, solutions):
for solution in solutions]
compiled_commands = zip(solutions, executables, compilation_results)
names = solutions
return self.run_solutions(compiled_commands, names, solutions, self.args.program_report)
return self.run_solutions(compiled_commands, names, solutions, self.args.solutions_report)


def print_expected_scores(self, expected_scores):
@@ -499,7 +503,7 @@ def validate_expected_scores(self, results):

config_expected_scores = self.config.get("sinol_expected_scores", {})
used_solutions = results.keys()
if self.args.programs == None and config_expected_scores: # If no solutions were specified, use all programs from config
if self.args.solutions == None and config_expected_scores: # If no solutions were specified, use all solutions from config
used_solutions = config_expected_scores.keys()

used_groups = set()
@@ -549,7 +553,7 @@ def validate_expected_scores(self, results):
added_groups.add(group[0])
elif type == "remove":
# We check whether a solution was removed only when sinol_make was run on all of them
if field == '' and self.args.programs == None and config_expected_scores:
if field == '' and self.args.solutions == None and config_expected_scores:
for solution in change:
removed_solutions.add(solution[0])
# We check whether a group was removed only when sinol_make was run on all of them
@@ -753,7 +757,7 @@ def run(self, args):
self.groups = list(sorted(set([self.get_group(test) for test in self.tests])))
self.possible_score = self.get_possible_score(self.groups)

solutions = self.get_solutions(self.args.programs)
solutions = self.get_solutions(self.args.solutions)
results = self.compile_and_run(solutions)
validation_results = self.validate_expected_scores(results)
self.print_expected_scores_diff(validation_results)
tests/commands/run/test_integration.py: 8 changes (4 additions, 4 deletions)
@@ -124,17 +124,17 @@ def test_flag_tests(create_package, time_tool):
assert command.tests == ["in/abc1a.in"]


def test_flag_programs(capsys, create_package, time_tool):
def test_flag_solutions(capsys, create_package, time_tool):
"""
Test flag --programs.
Checks if correct programs are run (by checking the output).
Test flag --solutions.
Checks if correct solutions are run (by checking the output).
"""
package_path = create_package
command = get_command()
create_ins_outs(package_path, command)

parser = configure_parsers()
args = parser.parse_args(["run", "--programs", "prog/abc1.cpp", "prog/abc2.cpp", "--time_tool", time_tool])
args = parser.parse_args(["run", "--solutions", "prog/abc1.cpp", "prog/abc2.cpp", "--time_tool", time_tool])
command = Command()
command.run(args)

tests/commands/run/test_unit.py: 18 changes (9 additions, 9 deletions)
@@ -52,7 +52,7 @@ def test_get_executable():
assert command.get_executable("abc.cpp") == "abc.e"


def test_compile_programs(create_package):
def test_compile_solutions(create_package):
package_path = create_package
command = get_command(package_path)
solutions = command.get_solutions(None)
@@ -105,7 +105,7 @@ def test_calculate_points():
def test_run_solutions(create_package, time_tool):
package_path = create_package
command = get_command(package_path)
command.args = argparse.Namespace(program_report=False, time_tool=time_tool)
command.args = argparse.Namespace(solutions_report=False, time_tool=time_tool)
create_ins_outs(package_path, command)
command.tests = command.get_tests(None)
command.groups = list(sorted(set([command.get_group(test) for test in command.tests])))
@@ -157,7 +157,7 @@ def test_validate_expected_scores_success():
command.scores = command.config["scores"]

# Test with correct expected scores.
command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK"},
}
@@ -166,7 +166,7 @@
assert results.removed_solutions == set()

# Test with incorrect result.
command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "WA"},
}
@@ -175,7 +175,7 @@
assert len(results.changes) == 1

# Test with removed solution.
command.args = argparse.Namespace(programs=None, tests=None)
command.args = argparse.Namespace(solutions=None, tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK"},
"abc1.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "WA"},
@@ -188,7 +188,7 @@

# Test with added solution and added group.
command.config["scores"][5] = 0
command.args = argparse.Namespace(programs=["prog/abc.cpp", "prog/abc5.cpp"], tests=None)
command.args = argparse.Namespace(solutions=["prog/abc.cpp", "prog/abc5.cpp"], tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "WA"},
"abc5.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "WA"},
@@ -199,7 +199,7 @@
assert len(results.added_groups) == 1

# Test with removed group.
command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK"},
}
@@ -208,7 +208,7 @@
assert len(results.removed_groups) == 1

# Test with correct expected scores and --tests flag.
command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=["in/abc1a.in", "in/abc2a.in"])
command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=["in/abc1a.in", "in/abc2a.in"])
results = {
"abc.cpp": {1: "OK", 2: "OK"},
}
@@ -222,7 +222,7 @@
command.scores = command.config["scores"]

# Test with missing points for group in config.
command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
results = {
"abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "OK"},
}
