# augmentation

Deep neural networks (DNNs) are notoriously brittle to small perturbations in their input data. As the picture below shows, an image classification model, even one with 99% accuracy on its test set, can misclassify input variations generated by a simple spatial transformation (e.g., rotation). This is the problem of robustness of deep neural networks.
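As a toy illustration of such a spatial variation, the snippet below rotates a tiny "image" (nested lists standing in for pixel tensors) by 90 degrees; a robust classifier should assign the same label to both the original and the rotated variant. This is a minimal sketch for intuition, not code from the repository.

```python
def rotate90(image):
    """Rotate a 2D pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

# A vertical bar; after rotation it becomes a horizontal bar --
# the content is the same, only the orientation changed.
digit = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
variant = rotate90(digit)
```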

image

## Approach

Recent studies showed that the robustness of DNN models can be increased by augmenting the training data with natural variations. However, applying all natural variations to the input data is too expensive, so we use a genetic algorithm to generate the most suitable variant of each input. In addition, we propose variants of the genetic algorithm with three different selection methods and compare their performance. This approach originates from SENSEI: https://www.comp.nus.edu.sg/~gaoxiang/papers/Sensei_ICSE20.pdf.
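The per-input genetic search can be sketched as follows. This is a hedged illustration, not the repository's implementation: the genome here is an assumed transformation parameter vector (rotation angle, x/y shift), and the fitness function would in practice be the model's loss on the transformed image, so that higher fitness means a "harder" variant.

```python
import random

POP_SIZE, GENERATIONS, MUT_RATE = 10, 5, 0.3
# Assumed parameter ranges: rotation angle, shift_x, shift_y.
PARAM_RANGES = [(-30, 30), (-3, 3), (-3, 3)]

def random_genome():
    return [random.uniform(lo, hi) for lo, hi in PARAM_RANGES]

def mutate(genome):
    # Perturb each gene with probability MUT_RATE, clamped to its range.
    return [
        min(hi, max(lo, g + random.gauss(0, (hi - lo) * 0.1)))
        if random.random() < MUT_RATE else g
        for g, (lo, hi) in zip(genome, PARAM_RANGES)
    ]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_search(fitness, select):
    """Evolve a population of transformation parameter vectors,
    returning the one that maximizes `fitness`. The `select`
    strategy is pluggable (see the selection-method variants)."""
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = [fitness(g) for g in pop]
        pop = [mutate(crossover(select(pop, scores), select(pop, scores)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

# Toy fitness standing in for "model loss on the transformed image",
# with greedy best-of-population selection:
best = genetic_search(
    lambda g: -sum(x * x for x in g),
    lambda pop, s: max(zip(pop, s), key=lambda t: t[1])[0],
)
```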

image

## Usage

The experiments are run with the following command; all experiment configurations are handled by config files (see configs/).

```
python main.py configs/{config_file}.json
```

The agent specified in the given config file is instantiated and its run() procedure is called. After run() completes, the agent's finalize() procedure is called.
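A minimal sketch of this config-driven entry point is shown below. The class names, config fields, and registry here are assumptions for illustration, not the repository's actual code; only the run()/finalize() lifecycle is taken from the text above.

```python
import json

class BaseAgent:
    """Assumed base class: config in, run() then finalize()."""
    def __init__(self, config):
        self.config = config
    def run(self):
        raise NotImplementedError
    def finalize(self):
        pass

class TrainAgent(BaseAgent):
    # Hypothetical agent: field names "model"/"epochs" are assumptions.
    def run(self):
        return f"training {self.config['model']} for {self.config['epochs']} epochs"
    def finalize(self):
        return "checkpoint saved"

AGENTS = {"TrainAgent": TrainAgent}

def main(config_path):
    with open(config_path) as f:
        config = json.load(f)
    agent = AGENTS[config["agent"]](config)
    result = agent.run()  # main experiment loop
    agent.finalize()      # cleanup / checkpointing
    return result
```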

## Experiments

We re-implemented SENSEI, an automatic data augmentation framework for DNNs, and then conducted three experiments to check whether the performance of our re-implementation follows the trends reported in the original paper. All of our experimental results are available at https://tensorboard.dev/experiment/hRrnCQ0PRyaATOyBojoWbQ/#scalars

### 1. Reproducing the results of the paper experiment

We verify our implementation of the framework by comparing the performance of a ResNet-50 model under three data-augmentation strategies: RANDOM (blue), W-10 (orange), and SENSEI (green). Plain model training without data augmentation (red) serves as the baseline for all other strategies.

image

### 2. Robustness of the Framework

A repetition of Experiment 1 with another implementation of ResNet-50: we use the API of the open-source library torchvision, which can instantiate any neural network model provided in its model zoo, including ResNet-50.

image

### 3. Extension to the Genetic Algorithm

A performance comparison of the data-augmentation strategies, including two of our proposed variants, SENSEI-RW and SENSEI-T, which modify the selection method of the genetic algorithm.
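The two selection methods behind these variants can be sketched as follows. The pairing of names to methods is an assumption from the suffixes (RW for roulette wheel, T for tournament), and the tournament size k is an illustrative default, not a value from the experiments.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Fitness-proportionate selection: an individual is picked
    with probability proportional to its (non-negative) fitness."""
    pick = random.uniform(0, sum(fitnesses))
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

def tournament_select(population, fitnesses, k=3):
    """Tournament selection: sample k individuals at random and
    return the fittest among them."""
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]
```

Roulette-wheel selection keeps weak individuals in play with small probability, while tournament selection's pressure is tuned by k, which is why swapping the selection method changes how aggressively the search converges.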

image

## Discussion

Discussion of the results can be found in our final report: https://github.com/beyaldiz/augmentation/blob/readme/CS454_Final_Project.pdf

## References