# Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al.

contributors: @GitYCC

[paper] [code]


**TL;DR**: It is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence.

### introduction video

https://youtu.be/M2IebCN9Ht4


### What differences remain between computer and human vision?

### not easy to prevent DNNs from being fooled

We also find that, for MNIST DNNs, it is not easy to prevent the DNNs from being fooled by retraining them with fooling images labeled as such. While retrained DNNs learn to classify the negative examples as fooling images, a new batch of fooling images can be produced that fool these new networks, even after many retraining iterations.
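A minimal sketch of that retraining loop, with hypothetical helpers rather than the authors' code: `train_network` fits a DNN on a labeled dataset and `evolve_fooling_images` runs the EA against the current network.

```python
# Hypothetical sketch of the retraining protocol described above;
# train_network and evolve_fooling_images are assumed helpers.
FOOLING_CLASS = 10  # extra "fooling" label alongside MNIST's 10 digit classes

def retrain_against_fooling(train_network, evolve_fooling_images,
                            dataset, n_iterations=10):
    net = train_network(dataset)                 # initial MNIST DNN
    for _ in range(n_iterations):
        fooled = evolve_fooling_images(net)      # EA finds fresh fooling images
        dataset = dataset + [(img, FOOLING_CLASS) for img in fooled]
        net = train_network(dataset)             # retrain with negatives added
    return net  # per the paper, a new batch of fooling images still succeeds
```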

### deep neural network models

- AlexNet DNN (already trained, provided by the Caffe software package), called “ImageNet DNN” in this paper
  - It obtains a 42.6% top-1 error rate on ImageNet.
- LeNet model (already trained, provided by the Caffe software package), called “MNIST DNN” in this paper
  - This model obtains a 0.94% error rate on MNIST.
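For orientation, loading one of these pre-trained Caffe models and reading off its top-1 confidence looks roughly like this; the file paths are placeholders for files from Caffe's model zoo, and image preprocessing is omitted.

```python
import numpy as np
import caffe  # Caffe's Python bindings

# Placeholder paths; the real files ship with Caffe's model zoo.
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', caffe.TEST)

image = np.random.uniform(0, 255, (3, 227, 227))  # stand-in preprocessed input
net.blobs['data'].data[0] = image
probs = net.forward()['prob'][0]                  # softmax over 1000 classes
print('top-1 class:', probs.argmax(), 'confidence:', probs.max())
```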

### generating images with evolution

**Evolutionary algorithms (EAs)**

cited from: https://towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac

**MAP-Elites (multi-dimensional archive of phenotypic elites)**

- a kind of EA
- Traditional EAs optimize solutions to perform well on one objective, or on all of a small set of objectives (e.g. one objective means a single ImageNet class).
- MAP-Elites enables us to simultaneously evolve a population that contains individuals that score well on many classes (e.g. all 1000 ImageNet classes); see the sketch after this list.
- citation: A. Cully, J. Clune, and J.-B. Mouret. Robots that can adapt like natural animals.
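A minimal MAP-Elites sketch under these assumptions: `dnn_predict` is a caller-supplied function returning the DNN's per-class confidence vector, and the map has one cell per class, as in the paper. The mutation here is a simple stand-in (the paper's direct encoding uses polynomial mutation, sketched further below).

```python
import numpy as np

N_CLASSES = 10           # e.g. MNIST; 1000 for ImageNet
IMG_SHAPE = (28, 28)

def mutate(img, rate=0.1, sigma=25.0):
    """Stand-in mutation: perturb a random subset of pixels."""
    child = img.copy()
    mask = np.random.rand(*img.shape) < rate
    child[mask] = np.clip(child[mask] + np.random.normal(0, sigma, mask.sum()),
                          0.0, 255.0)
    return child

def map_elites(dnn_predict, n_generations=20000, n_seeds=100):
    archive = {}  # one cell per class -> (best image so far, its confidence)

    def compete(img):
        scores = dnn_predict(img)              # shape (N_CLASSES,)
        for c in range(N_CLASSES):
            if c not in archive or scores[c] > archive[c][1]:
                archive[c] = (img, scores[c])

    for _ in range(n_seeds):                   # seed with random images
        compete(np.random.uniform(0, 255, IMG_SHAPE))
    for _ in range(n_generations):
        parent, _ = archive[np.random.choice(N_CLASSES)]  # pick a random elite
        compete(mutate(parent))                # child competes in every cell
    return archive
```

Each offspring is evaluated once and may displace the elite in any cell where it scores higher, which is how a single run covers all classes simultaneously.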

**Genetic operators: direct encoding and indirect encoding**

- direct encoding: each pixel value is initialized with uniform random noise within the [0, 255] range, and each pixel is mutated independently of the others (via the polynomial mutation operator); see the sketch after this list
- indirect encoding: a compositional pattern-producing network (CPPN); the resulting images tend to be regular because elements in the genome can affect multiple parts of the image
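A sketch of the direct encoding's initialization and mutation, using the simplified bounded form of Deb and Agrawal's polynomial mutation; the `eta` and `rate` values here are illustrative, not the paper's settings.

```python
import numpy as np

def polynomial_mutation(x, low=0.0, high=255.0, eta=20.0, rate=0.1):
    """Simplified polynomial mutation: eta controls how tightly
    offspring values cluster around the parent values."""
    x = x.copy()
    mask = np.random.rand(*x.shape) < rate            # which pixels mutate
    u = np.random.rand(*x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    x[mask] = np.clip(x[mask] + delta[mask] * (high - low), low, high)
    return x

genome = np.random.uniform(0, 255, (28, 28))  # direct encoding: random init
child = polynomial_mutation(genome)
```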

The authors also show that CPPN-encoded evolution can produce images that both humans and DNNs recognize.
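To illustrate why indirect encodings come out regular, here is a toy fixed-topology CPPN: every pixel is computed from its own coordinates by the same small network, so a single genome element shapes broad regions of the image. Real CPPNs evolve their topology and activation functions via NEAT; this random-weights version is only a sketch.

```python
import numpy as np

ACTIVATIONS = [np.sin, np.tanh, lambda z: np.exp(-z ** 2)]  # per-layer functions

def render_cppn(weights, size=28):
    # Every pixel gets the same inputs: its (x, y) coordinates and
    # distance from the image center, each scaled to [-1, 1].
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1) * 2 - 1
    h = np.stack([xs, ys, np.sqrt(xs ** 2 + ys ** 2)])   # 3 x size x size
    for w, act in zip(weights, ACTIVATIONS):
        h = act(np.tensordot(w, h, axes=1))              # mix channels, activate
    out = h[0]
    return (out - out.min()) / (np.ptp(out) + 1e-8) * 255  # rescale to [0, 255]

genome = [np.random.randn(3, 3) for _ in ACTIVATIONS]    # the "genome": weights
image = render_cppn(genome)                              # smooth, regular pattern
```

Mutating one weight in `genome` warps the whole pattern smoothly, which is exactly the regularity described above.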

### Results

**Evolving irregular images to match MNIST**

**Evolving regular images to match MNIST**

**Evolving regular images to match ImageNet**

### Reference

- A. Nguyen, J. Yosinski, and J. Clune. "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images." CVPR 2015.