Resources on XAI:

General

NLP:

Time series:

  • Evaluation of time series explanations by perturbing subsequences of the data
    • The basic idea is to observe the drop in classification quality when perturbing the most important features, both i) in isolation and ii) together with the following timestamps (see the first sketch after this list)
  • Temporal Saliency Rescaling
    • The main idea behind Temporal Saliency Rescaling is to split feature importance attribution into two steps: first identify the important time steps, then identify the important features within each of those time steps (see the second sketch after this list). This should avoid the alleged issue that saliency methods tend to mark a full time step as important even though only a few of its features are relevant
    • It is unclear how the evaluation actually works; it seems to mainly come down to looking at precision/recall for perturbed time series
  • Benchmarking Deep Learning Interpretability in Time Series Predictions
    • The main idea is a network architecture that allows computing a saliency map separately for the time domain and the feature domain
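
A minimal sketch of the perturbation-based evaluation described in the first bullet, assuming a precomputed saliency array with the same shape as the data, a generic `predict_fn`, and zero-replacement as the perturbation; the function names, the top-`k` selection, and the `window` parameter are illustrative choices, not the exact protocol of the paper.

```python
import numpy as np

def perturb_top_timesteps(X, saliency, k=5, window=0, fill_value=0.0):
    """Mask the k most salient timesteps of each series.

    X        : (n_samples, n_timesteps, n_features)
    saliency : importance scores with the same shape as X
    window   : number of following timestamps perturbed along with each
               selected timestep (0 = perturb the timesteps in isolation)
    """
    X_pert = X.copy()
    step_importance = saliency.sum(axis=2)               # (n_samples, n_timesteps)
    top_steps = np.argsort(-step_importance, axis=1)[:, :k]
    for i in range(X.shape[0]):
        for t in top_steps[i]:
            X_pert[i, t : t + window + 1, :] = fill_value
    return X_pert

def accuracy_drop(predict_fn, X, y, saliency, k=5, window=0):
    """Drop in accuracy after masking the most important timesteps."""
    base = np.mean(predict_fn(X) == y)
    pert = np.mean(predict_fn(perturb_top_timesteps(X, saliency, k, window)) == y)
    return base - pert

# i) perturb the top timesteps in isolation vs. ii) together with the
# following timestamps, and compare the two accuracy drops:
# drop_isolated = accuracy_drop(model.predict, X_test, y_test, saliency, window=0)
# drop_window   = accuracy_drop(model.predict, X_test, y_test, saliency, window=3)
```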

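A rough sketch of the two-step Temporal Saliency Rescaling idea, assuming a generic `saliency_fn` that maps a single series of shape `(T, F)` to a score map of the same shape; the masking baseline and the mean-based threshold `alpha` are placeholder choices meant to illustrate the decomposition, not the exact procedure of the paper.

```python
import numpy as np

def temporal_saliency_rescaling(x, saliency_fn, baseline=0.0, alpha=None):
    """Two-step attribution for a single series x of shape (T, F)."""
    T, F = x.shape
    base_map = saliency_fn(x)                         # (T, F) saliency scores

    # Step 1: score each timestep by how much the saliency map changes
    # when that whole timestep is masked.
    time_rel = np.zeros(T)
    for t in range(T):
        x_masked = x.copy()
        x_masked[t, :] = baseline
        time_rel[t] = np.abs(base_map - saliency_fn(x_masked)).sum()

    if alpha is None:
        alpha = time_rel.mean()                       # placeholder threshold

    # Step 2: within the important timesteps only, score each feature the
    # same way; the final map is the product of the two scores.
    out = np.zeros((T, F))
    for t in np.where(time_rel > alpha)[0]:
        for f in range(F):
            x_masked = x.copy()
            x_masked[t, f] = baseline
            feat_rel = np.abs(base_map - saliency_fn(x_masked)).sum()
            out[t, f] = time_rel[t] * feat_rel
    return out
```
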
Imagery:

Evaluating a saliency map vs. evaluating a saliency method:
