adversarial (via OpenAttack) has more than twice the execution time of Polyjuice, which already takes quite a while. Since CFEs already cover a similar operation, OpenAttack is no longer part of the roadmap. The long-term plan is to train one multi-purpose model that can reasonably perturb text for generating adversarial attacks, counterfactuals, and general data augmentation all at once.
interact (via HEDGE) cannot be implemented, because hierarchical explanations have no obvious natural-language representation. Visualizations are not on the agenda for now.
rules (via Anchors) does not appear to return rules that are inherently meaningful (they are mostly single tokens) and takes very long to compute.
rationalize (via the OpenAI API or a rationalizing LLM) will be implemented soon. The plan:
1. Use GPT-3.5 / -4 to generate a few hundred rationales in a zero-shot setup (see the first sketch below)
2. Fine-tune a T5 for each dataset of rationales
3. Run inference with the fine-tuned T5 to produce rationales for the rest of the datasets, because using ChatGPT for the tens of thousands of examples in BoolQ, OLID & DD is too expensive (see the second sketch below)
4. Store the generated rationales as CSVs or JSONs (see the pre-computed feature attribution explanations in the cache folder for reference)
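A minimal sketch of step 1, the zero-shot rationale generation. The prompt wording, model name, placeholder data, and output file name are assumptions for illustration, not the project's actual setup:

```python
import csv
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical zero-shot prompt; the real prompt would be tuned per dataset.
PROMPT = (
    "Text: {text}\n"
    "Label: {label}\n"
    "Explain in one or two sentences why this label is correct."
)

def generate_rationale(text: str, label: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the chat model for a free-text rationale in a zero-shot setup."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text, label=label)}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip()

# Generate a few hundred seed rationales and store them for fine-tuning.
examples = [("The movie was dull and overlong.", "negative")]  # placeholder data
with open("rationales_seed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label", "rationale"])
    for text, label in examples:
        writer.writerow([text, label, generate_rationale(text, label)])
```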
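And a rough sketch of steps 2-4, assuming the seed rationales from above: fine-tune a T5 on them with Hugging Face `transformers`, run inference over the remaining examples, and dump the results to CSV. File names, hyperparameters, and the input format are assumptions:

```python
import csv
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def preprocess(batch):
    # Input: the classified text; target: the GPT-generated rationale.
    inputs = tokenizer(batch["text"], truncation=True, max_length=512)
    targets = tokenizer(text_target=batch["rationale"], truncation=True, max_length=128)
    inputs["labels"] = targets["input_ids"]
    return inputs

seed = load_dataset("csv", data_files="rationales_seed.csv")["train"]
tokenized = seed.map(preprocess, batched=True, remove_columns=seed.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="t5-rationalizer", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Inference over the rest of the dataset; the generated rationales are written
# to a CSV, mirroring the pre-computed feature attribution files in the cache folder.
remaining = ["placeholder example from BoolQ / OLID / DD"]  # hypothetical data
with open("rationales_full.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "rationale"])
    for text in remaining:
        ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
        out = model.generate(ids, max_new_tokens=128)
        writer.writerow([text, tokenizer.decode(out[0], skip_special_tokens=True)])
```

Whether one T5 per dataset or a single shared model works better is an open question; the per-dataset variant in step 2 only changes which CSV is loaded here.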