diff --git a/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/README.md b/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/README.md
new file mode 100644
index 0000000..2aa24c0
--- /dev/null
+++ b/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/README.md
@@ -0,0 +1,9 @@
+# TIME (Text-to-Image Models for Counterfactual Explanations)
+
+Finding counterfactual explanations for image classification is a challenging task. The feature space of images is sparse and high-dimensional. Thus, to satisfy the plausibility property of CEs, a method should produce images that are visually similar to the original while keeping the modification human-understandable - adding random noise won't explain the model's decision.
+Previous approaches typically relied on diffusion models guided by the classifier's gradients or exploited the model's inner structure.
+
+In next week's presentation, I will talk about the first black-box method for generating counterfactual explanations for image classification. TIME (Text-to-Image Models for Counterfactual Explanations) is a novel method that generates CEs using diffusion models by combining two ideas - textual inversion and EDICT - into an ingenious approach to explaining image classifiers.
+
+
+The presentation is based on [this paper](https://arxiv.org/pdf/2309.07944.pdf)
\ No newline at end of file
diff --git a/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/TIME.pdf b/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/TIME.pdf
new file mode 100644
index 0000000..46a27f1
Binary files /dev/null and b/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations/TIME.pdf differ
diff --git a/README.md b/README.md
index 05a0e92..5541086 100644
--- a/README.md
+++ b/README.md
@@ -18,9 +18,9 @@ Join us at https://meet.drwhy.ai.
 * 25.03.2024 - Law in AI - Andrzej Porębski
 * 08.04.2024 - [Introduction to counterfactual explanations track](https://github.com/MI2DataLab/MI2DataLab_Seminarium/tree/master/2024/2024_04_08_Intro_to_CEs) - Mateusz Krzyziński, Bartek Sobieski
 * 15.04.2024 - [The Privacy Issue of Counterfactual Explanations: Explanation Linkage Attacks](https://github.com/HubertR21/MI2DataLab_Seminarium/tree/master/2024/2024_04_15_explanation_linkage_attacks) - Mikołaj Spytek
-* 22.04.2024 - Text-to-Image Models for Counterfactual Explanations: A Black-Box Approach - Tymoteusz Kwieciński
+* 22.04.2024 - [Text-to-Image Models for Counterfactual Explanations: A Black-Box Approach](https://github.com/MI2DataLab/MI2DataLab_Seminarium/tree/master/2024/2024_04_22_TIME_Text-To-Image_For_Counterfactual_Explanations) - Tymoteusz Kwieciński
 * 06.05.2024 - GLOBE-CE: A Translation Based Approach for Global Counterfactual Explanations - Piotr Wilczyński
-* 13.05.2024 - Introduction to ViT and tranformer attributions - Filip Kołodziejczyk
+* 13.05.2024 - Introduction to ViT and transformer attributions - Filip Kołodziejczyk
 * 20.05.2024 - TBD
 * 27.05.2024 - TBD
 * 03.06.2024 - TBD