AMBER tests could be clearer #65
Thanks for opening an issue. Sorry, I don't quite understand the issue, maybe because I'm not familiar with the AMBER parser. Would you mind providing an example of test_invalidfiles? Thanks.
Right now we have, in test_amber.py in alchemlyb, a single test_invalidfiles that loads every file in alchemtest's invalid_files set and checks them all together. So essentially all "invalid_files" are loaded and checked, but from the errors reported it's not easy to understand what was being tested and where it went wrong. Instead of a common test_invalidfiles() we could have a test for each file in the "invalid_files" directory, with a proper name:
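For illustration, here is a hedged sketch of how per-file tests could look using pytest parametrization with descriptive ids; the file names and the parser call are hypothetical stand-ins, not the actual alchemlyb code:

```python
import os
import pytest


def descriptive_id(path):
    # ".../invalidfiles/contains_no_useful_data.out.bz2" -> "contains_no_useful_data"
    return os.path.basename(path).split('.')[0]


# Hypothetical stand-in for the list returned by load_invalidfiles();
# real paths and names differ.
INVALID_FILES = [
    "/data/amber/invalidfiles/contains_no_useful_data.out.bz2",
    "/data/amber/invalidfiles/no_control_data.out.bz2",
]


@pytest.mark.parametrize("invalid_file", INVALID_FILES, ids=descriptive_id)
def test_invalid_file(invalid_file):
    # A real test would call the AMBER parser here, e.g. (assumed behavior):
    #     assert extract_dHdl(invalid_file, T=298.0) is None
    assert invalid_file.endswith(".out.bz2")
```

With ids=descriptive_id, a failure shows up as test_invalid_file[contains_no_useful_data] instead of a long opaque path, so the pytest output itself says what was being tested.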
something along this line... I don't know if it creates any problems, or if it breaks some common practice!
I will close this one as it is not relevant now.
If the concern is that the failure message doesn't say what went wrong, shall we just give the files descriptive names? So instead of

src/alchemlyb/tests/parsing/test_amber.py::test_invalidfiles[/home/runner/micromamba-root/envs/test/lib/python3.8/site-packages/alchemtest/amber/invalidfiles/invalid-case-1.out.bz2]

we would get

src/alchemlyb/tests/parsing/test_amber.py::test_invalidfiles[/home/runner/micromamba-root/envs/test/lib/python3.8/site-packages/alchemtest/amber/invalidfiles/contains_no_useful_data.out.bz2]

which should tell people which test is behaving wrongly. If you feel that this is too ugly, we could change the format of load_invalidfiles(): given descriptive file names, we could have a loader that exposes them directly.
My concern is that if we have such specially named files in alchemtest, then these tests do not have the same format as the rest of the files in the repo, like those for load_simplesolvated or load_bace_example.
I think it is perfectly fine to have these files in a different format than the other test files used to test AMBER. The thing is, these files are used to test completely different things. The files loaded by load_simplesolvated/load_bace_example are entire datasets, created from some simulation, and represent a "real case scenario". Frankly, some of the errors in the files here are hard if not impossible to encounter in real-world scenarios, but I think it's fine to have them here. In my opinion, it's not good practice to have a single test that checks all invalid files at once, as we have right now; we should have a test for each possible invalid file. Moreover, for example, the three test files that I would add in an update (issue #69) are needed to exercise some code branches that are currently not tested, but which are not strictly "invalid": they would just raise warnings to the user.
Frankly, I'm not too happy with the current structure of alchemtest either (see #56, which I raised a year ago), as there are two conflicting goals for this repo. I think the primary goal of alchemtest is currently to offer the user some ready-made datasets to test alchemlyb or other alchemistry software. The secondary goal is to provide test files for the alchemlyb developers to use. To achieve the primary goal, we don't want to confuse the user with too many datasets when one imports the package. My thought is to have all the test files under one directory; then, in the accessor, expose them from there.
Now I understand; I viewed alchemtest just as a suite of tests for the developer! So, right now a user can use these datasets directly. As far as I understand, the loader could look like:

```python
def load_invalidfiles():
    """Load a single file to be used to run tests on the amber parser functionalities.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:

        - 'data' : the requested file
        - 'DESCR': the full description of the file
    """
    module_path = dirname(__file__)
    data = join(module_path, 'invalidfiles', f'{filename}.out.bz2')
    with open(join(module_path, 'invalidfiles', f'{filename}.descr.rst')) as rst_file:
        fdescr = rst_file.read()
    return Bunch(data=data, DESCR=fdescr)
```

In this way, in the tests we could have:

```python
def test_something(filename=load_invalidfiles("file_needed")['data']):
    "do some test with filename"
```

Is that something doable? I don't think it will make the situation worse for the common user.
Yes, I'm happy with grabbing the test file like that. But I struggle to see how load_invalidfiles(), with no filename argument, would achieve this?
Sorry, my bad, the function declaration should be

```python
def load_invalidfiles(filename):
    ...
```
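To make the idea concrete, here is a minimal self-contained sketch of such a per-file accessor; the Bunch class, the module_path default, and the file names are assumptions for illustration, not alchemtest's real implementation:

```python
from os.path import join


class Bunch(dict):
    """Minimal dictionary with attribute access, standing in for alchemtest's Bunch."""
    def __getattr__(self, key):
        return self[key]


def load_invalidfiles(filename, module_path="/data/amber"):
    # Build the paths to one invalid file and its .descr.rst description.
    data = join(module_path, "invalidfiles", f"{filename}.out.bz2")
    descr_path = join(module_path, "invalidfiles", f"{filename}.descr.rst")
    # A real implementation would read descr_path into DESCR; here we just
    # record the path to keep the sketch self-contained.
    return Bunch(data=data, DESCR=descr_path)


bunch = load_invalidfiles("contains_no_useful_data")
print(bunch.data)  # /data/amber/invalidfiles/contains_no_useful_data.out.bz2
```

Each test could then request exactly the file it needs by its descriptive name.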
I think this is a good API call. I'm not sure if the doc would be correctly generated, but if the doc is fine, I think this is a reasonable way of calling the test.
I fully support better naming the keys of the disparate test files. I can also get behind renaming the files themselves. As #65 (comment) stated, we want to keep our namespace reasonably clean, so I'd like to keep a single accessor for all the test files, i.e., one DESCR etc. I'd make the argument for a single accessor; I don't quite see how a separate accessor per file fits in.
My original proposal in #65 (comment) was that a single test file could be accessed through a dictionary lookup on the loaded data. I haven't read through this PR and don't have time right now, but I think in #65 (comment) @DrDomenicoMarson is suggesting that the same file could be accessed through a per-file call like load_invalidfiles(filename). The way I see it, both approaches expose the same files; the difference is whether there is one accessor or one call per file.
The currently API-compatible way to get single files is through the loaded dataset's data attribute.

EDIT: The above is incorrect; it should be the dictionary form discussed below.
Is breaking the API, and potentially having to fix the docs build, worth the effort of saving a slightly longer call?
No, we don't.

In fact, I think all the XXX.data attributes return a dictionary?
Sorry, you're right.

So accessing a single file would be a dictionary lookup on the data attribute.
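A hedged sketch of the two access patterns being weighed here; the function names, keys, and paths are illustrative, not the shipped alchemtest API:

```python
class Bunch(dict):
    """Minimal dict-with-attribute-access stand-in for alchemtest's Bunch."""
    def __getattr__(self, key):
        return self[key]


# Option A: one accessor returning all files in a dictionary keyed by name.
def load_invalidfiles():
    files = {"contains_no_useful_data":
             "/data/invalidfiles/contains_no_useful_data.out.bz2"}
    return Bunch(data=files, DESCR="invalid AMBER output files")


# Option B: a per-file accessor that takes the descriptive name.
def load_invalidfile(name):
    return Bunch(data=f"/data/invalidfiles/{name}.out.bz2",
                 DESCR=f"description of {name}")


# Both reach the same file; the difference is one accessor for everything
# (keeping the namespace small) versus one call per file.
via_dict = load_invalidfiles().data["contains_no_useful_data"]
via_call = load_invalidfile("contains_no_useful_data").data
assert via_dict == via_call
```

Option A keeps the existing API shape and a single DESCR; Option B makes each test's intent explicit at the call site.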
I agree.

It makes sense to create a new accessor for this.
Sorry for the confusion. When I say …
Ok, fully agree with you.
Hello everyone,

I'm testing my changes with the AMBER parser, and I was thinking about the reasoning behind having just one test, test_invalidfiles, that tries a bunch of invalid files, all of which have the same name, so that every time you have to check a descr file to know what was wrong. I thought it would be much easier (as pytest reports the name of the file read and the name of the test that failed) if:

a) we rename the files with proper descriptive names (not a good idea, I think), or
b) we divide test_invalidfiles into one test for each file/failure case; this way it's easier to add new tests for possible invalid files, and the pytest output in case of error would be more informative.

I don't know if this suggestion goes against some common behavior or paradigm, as it's the first time I'm using pytest and contributing to a public repository.