Predict confidence #63

Open
LouisCarpentier42 opened this issue Dec 10, 2024 · 1 comment
@LouisCarpentier42 (Collaborator)

Should we implement the method described in [1] to quantify the confidence of an anomaly detector in its example-wise predictions? Such a measure could be useful for quantifying a detector's uncertainty. The challenge is that the approach in [1] assumes the data are i.i.d. (independent and identically distributed), which is typically not the case for time series data. Nevertheless, it would be worthwhile to see how well the approach works with time series anomaly detectors.

[1] Lorenzo Perini, Vincent Vercruyssen, and Jesse Davis. Quantifying the confidence of anomaly detectors in their example-wise predictions. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 227–243. Springer, 2020. https://doi.org/10.1007/978-3-030-67664-3_14
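For context, a rough sketch of how the computation in [1] (ExCeeD) could look, as I understand it: estimate, per test example, the probability that a random training example scores no higher, then turn that into a binomial outlier-confidence. The function name `exceed_confidence` is made up for illustration, and the exact prior and threshold handling in the paper may differ:

```python
import numpy as np
from scipy.stats import binom

def exceed_confidence(train_scores, test_scores, contamination):
    """Sketch of an ExCeeD-style example-wise outlier confidence [1].

    Hypothetical helper, not part of the library; details may deviate
    from the paper.
    """
    train_scores = np.asarray(train_scores, dtype=float)
    test_scores = np.asarray(test_scores, dtype=float)
    n = len(train_scores)
    # Bayesian estimate (uniform prior) of the probability that a random
    # training example scores no higher than each test example.
    counts = (train_scores[None, :] <= test_scores[:, None]).sum(axis=1)
    p = (1.0 + counts) / (2.0 + n)
    # Confidence of the "anomaly" prediction: probability that the example
    # would rank among the top-contamination fraction of n training scores.
    threshold = int(n * (1.0 - contamination))
    return binom.sf(threshold - 1, n, p)  # P(Y >= threshold), Y ~ Bin(n, p)
```

An example scoring far above all training scores should get confidence near 1, and one scoring below them confidence near 0.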

@LouisCarpentier42 (Collaborator, Author)

If this method is implemented, we can simply add it as an additional method on the BaseDetector: it would call decision_function or predict_proba and then perform some additional computations on the resulting scores.
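A minimal sketch of what that could look like, assuming a stripped-down stand-in for BaseDetector (the real class has more to it) and a hypothetical method name `predict_confidence`:

```python
import numpy as np
from scipy.stats import binom

class BaseDetector:
    # Minimal stand-in for the real BaseDetector; only what the sketch needs.
    def decision_function(self, X):
        raise NotImplementedError

    def predict_confidence(self, X, train_scores, contamination=0.05):
        # Hypothetical method name. Reuses decision_function, then applies
        # an ExCeeD-style computation [1] on the resulting anomaly scores.
        scores = np.asarray(self.decision_function(X), dtype=float)
        train_scores = np.asarray(train_scores, dtype=float)
        n = len(train_scores)
        counts = (train_scores[None, :] <= scores[:, None]).sum(axis=1)
        p = (1.0 + counts) / (2.0 + n)
        threshold = int(n * (1.0 - contamination))
        return binom.sf(threshold - 1, n, p)

class MeanDeviationDetector(BaseDetector):
    # Toy detector: anomaly score = absolute deviation from a fixed mean.
    def __init__(self, mean):
        self.mean = mean

    def decision_function(self, X):
        return np.abs(np.asarray(X, dtype=float) - self.mean)
```

Because predict_confidence only touches decision_function, every detector subclass would inherit it for free.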

It might also be interesting to add a new visualization function, similar to plot_anomaly_scores, which visualizes the confidence as a band around the anomaly scores.
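One possible shape for such a function, sketched with matplotlib; the name `plot_anomaly_scores_with_confidence` and the choice to make the band width scale with (1 − confidence) are assumptions, not the library's API:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

def plot_anomaly_scores_with_confidence(scores, confidence, ax=None):
    # Hypothetical companion to plot_anomaly_scores: shade a band around
    # the score curve whose width shrinks as confidence grows.
    ax = ax or plt.gca()
    scores = np.asarray(scores, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    t = np.arange(len(scores))
    half_width = (1.0 - confidence) * scores  # wider band = less confident
    ax.plot(t, scores, label="anomaly score")
    ax.fill_between(t, scores - half_width, scores + half_width,
                    alpha=0.3, label="confidence band")
    ax.legend()
    return ax
```

Other band semantics (e.g. a fixed-width band scaled by 1 − confidence, independent of the score magnitude) would work with the same plotting structure.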
