Abhi24krishna/FastAttack
FastAttack

An adversarial attack method against text classifiers that uses FastText embeddings to generate word-level transformations.

To run: using Google Colab with a GPU runtime is recommended; it saves you from installing a long list of dependencies locally.

  1. Make a copy of the FastAttack-models folder from https://drive.google.com/drive/folders/1izR5sO08up0LJ3-g7qtO30iHfNnaxFAU?usp=sharing to your Drive.

  2. Open FastAttack.ipynb on Colab.

  3. Load fasttext.model by updating its path to match your Drive.

  4. The target models and datasets can be changed at the locations pointed out in the notebook. The list of available models and datasets is here: https://textattack.readthedocs.io/en/latest/3recipes/models.html#available-models

  5. Run the attack on a chosen number of examples. The per-example results and a summary are printed to the console.
