[ENH] Replace prts metrics #2400
base: main
Conversation
Thank you for contributing to aeon!
Some of these functions look like they should be private/protected. In the tests please run both the new and current functions to ensure the output is the same (using pytest)
aeon/benchmarking/metrics/anomaly_detection/tests/test_metrics.py
Changes made from the previous commit:
@MatthewMiddlehurst Please let me know if the test cases are fine; I can separate them into different functions if necessary.
Thanks for this, it's really helpful. We will look again next week.
Yes yes, it's just that I got carried away trying to implement classic metrics (obviously not what the issue stated).
Also, a ValueError will be raised when …
It's fine to use bits of documentation from the other function when applicable. I'll leave the input type call to Sebastian, but it's a bit odd to change it IMO.
Again, I would like the output of the new functions verified against the current ones, or the current functions to be used in testing for now, just to verify that the output is the same.
@MatthewMiddlehurst There is repetition of documentation in the public functions, so changing them to point towards other functions would be a bad idea.
I'm not quite sure what you mean. In case there is confusion about the original issue, the end goal is to eventually remove the other version of these functions.
The implementation supports range metrics (ts_precision, ts_recall, ts_fscore) and removes the dependency on prts.
Also, there are some changes to be made:
I'll update in some time.
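For intuition about the range metrics mentioned above, here is a minimal, self-contained sketch of flat-bias range-based precision and recall in the style of Tatbul et al. (2018), assuming inclusive (start, end) index tuples. This is an illustration only, not the PR's actual implementation:

```python
def _overlap_size(r, ranges):
    """Number of points of inclusive range r covered by any range in `ranges`."""
    covered = set()
    for s, e in ranges:
        covered.update(range(max(s, r[0]), min(e, r[1]) + 1))
    return len(covered)


def range_based_precision(y_pred, y_real):
    """Flat-bias range precision: how much of each predicted range is real."""
    scores = [_overlap_size(p, y_real) / (p[1] - p[0] + 1) for p in y_pred]
    return sum(scores) / len(scores) if scores else 0.0


def range_based_recall(y_pred, y_real, alpha=0.0):
    """Flat-bias range recall: existence reward blended with overlap reward."""
    scores = []
    for r in y_real:
        overlap = _overlap_size(r, y_pred) / (r[1] - r[0] + 1)
        existence = 1.0 if overlap > 0 else 0.0
        scores.append(alpha * existence + (1 - alpha) * overlap)
    return sum(scores) / len(scores) if scores else 0.0
```

For example, range_based_recall([(2, 4)], [(0, 5)]) is 3/6 = 0.5, since the single real range is half-covered; the F1-score then combines precision and recall in the usual way.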
@MatthewMiddlehurst @SebastianSchmidl @TonyBagnall The changes from my side are done; please review when free and suggest changes if any :)
I may just be missing something here. Why would there be repetition in documentation if you use the documentation from the functions we are removing and replacing? Could you please:
I meant that the functions precision and recall in the newer implementation have the same docstrings.
The already existing implementation (_binary.py) takes binary arrays as inputs. The one we are trying to replace it with takes inputs as a list of tuples. So the change would be to convert the list of tuples to binary arrays for compatibility with the already existing functions? Thanks for pointing out the conversion; I hadn't checked the input data type for _binary.py.
It would be fine to accept both and convert to either internally. |
@MatthewMiddlehurst @SebastianSchmidl I've added a function to check whether the inputs are binary; if they are, they get converted to range-based tuples for further calculations. Also added the existing test cases from … The metrics can now take range-based or binary inputs.
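One plausible shape for that binary check and binary-to-ranges conversion (the helper names here are hypothetical, not the PR's actual function names):

```python
import numpy as np


def is_binary(y):
    """True if y looks like a flat 0/1 array rather than a list of ranges."""
    arr = np.asarray(y)
    return arr.ndim == 1 and set(np.unique(arr)) <= {0, 1}


def binary_to_ranges(y):
    """Convert a binary array into inclusive (start, end) anomaly ranges."""
    y = np.asarray(y)
    # Pad with zeros so runs touching either edge are handled uniformly.
    diff = np.diff(np.concatenate(([0], y, [0])))
    starts = np.where(diff == 1)[0]
    ends = np.where(diff == -1)[0] - 1
    return list(zip(starts.tolist(), ends.tolist()))
```

For example, binary_to_ranges([0, 1, 1, 0, 1]) returns [(1, 2), (4, 4)], after which the range-based computation can proceed as before.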
A bit of help needed with the errors. All the tests pass locally. |
@MatthewMiddlehurst Thanks for the help! Any more changes necessary?
This is still missing, right? |
The updated test compares the results of the previous version (_binary.py) with the new version (range_metrics). Also, let me know if you want me to remove some test cases; these cover all the edge cases.
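A sketch of the comparison pattern such a test might use; the import paths, function names, and argument order below are assumptions about the PR, not a confirmed API:

```python
import numpy as np

# Hypothetical imports -- module paths and signatures are assumptions.
from aeon.benchmarking.metrics.anomaly_detection import range_precision  # old
from aeon.benchmarking.metrics.anomaly_detection import ts_precision  # new


def test_ts_precision_matches_range_precision():
    # Binary labels: the new implementation should accept these directly.
    y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
    y_pred = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
    old = range_precision(y_true, y_pred)  # prts-backed version (assumed)
    new = ts_precision(y_pred, y_true)  # new version, predictions first (assumed)
    np.testing.assert_allclose(new, old)
```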
Reference Issues/PRs
Addresses #2066
What does this implement/fix? Explain your changes.
Implementation of Precision, Recall and F1-score metrics.
PR checklist
For all contributions