# TensorFlow Projects

A repo of everything deep and neurally related. Implementations and ideas are largely based on papers from arXiv and on implementations and tutorials from around the internet.

1. Folder Hierarchy
2. Results

## Folders

- ContextEncoder - Context inpainting; a modified implementation based on the idea from Context Encoders: Feature Learning by Inpainting.
- Dataset_Reader - Common dataset readers and other files related to the input-file reading pipeline.
- Emotion Detection - A Kaggle classification problem.
- Face Detection - Face detection as a regression problem, from Kaggle.
- Generative Networks - Attempts at generative models, mostly done with strided convolutions and their transposes.
  - Image Analogy - Implementation based on the Deep Visual Analogy-Making paper. Dataloader code is based on carpedm20's implementation.
  - Generative NeuralStyle (Johnson et al.) - Needs further tuning.
- ImageArt - Everything artistic with deep nets.
  - DeepDream, LayerVisualization, NeuralStyle (Gatys et al.), ImageInversion (Mahendran et al.) - all implementations are VGG-model based.
  - NeuralArtist - A mapping from location to RGB, posed as an optimization problem; idea based on karpathy's convnet.js implementation.
- MNIST - My first ever code in TensorFlow. Check this out if you are new to deep learning and TensorFlow; based on the TensorFlow tutorial, with additions here and there.
- Model_pruning - Files used for experimenting with pruning in neural networks.
- Unsupervised_learning - AutoEncoder, VAE, and Generative Adversarial Network (GAN) implementations.
- notMNIST - Well, you've got to follow up MNIST with something :D
- logs - TensorFlow Summary and Saver directory for all problems.
- A couple more implementations, attempts at solving a few other problems:
  - Deblurring - Posing image deblurring as a convolutional network problem; the architecture is based on the image super-resolution paper by Dong et al. (a sketch follows this list).
  - FindInceptionSimilarity - This implementation made me realize an important general concept in machine learning: symbolic vs. distributed representations.
- TensorflowUtils - Helper functions, since most of the time parameters are just given default values.
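The deblurring setup above follows the three-layer architecture of Dong et al.'s super-resolution paper (SRCNN). A minimal sketch in TensorFlow/Keras, assuming the paper's default 9-1-5 filter sizes rather than this repo's exact hyperparameters (the repo itself targets an older TF API):

```python
import tensorflow as tf

# A minimal SRCNN-style network (Dong et al.): patch extraction,
# non-linear mapping, reconstruction. The 9-1-5 filter sizes are the
# paper's defaults, assumed here rather than copied from this repo.
def build_deblur_net(channels=3):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu",
                               input_shape=(None, None, channels)),
        tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(channels, 5, padding="same"),  # linear output
    ])

model = build_deblur_net()
model.compile(optimizer="adam", loss="mse")  # pixel-wise reconstruction loss
```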

## Results

- Generative Adversarial Networks - TensorFlow implementation of DCGAN on the Flowers and CelebA datasets. Used carpedm20's and Newmu's DCGAN code for reference. The sample outputs below are the generated images, in increasing order of epochs from left to right.

  CelebA

  The CelebA dataset used here has faces that are well aligned - can we do better with unaligned images? The results are below.

  Flowers
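The generator behind such samples is the usual DCGAN stack of strided transposed convolutions. A minimal sketch, assuming the DCGAN paper's default sizes (100-d noise vector, 64x64 RGB output), not necessarily the exact architecture used here:

```python
import tensorflow as tf

# DCGAN-style generator: project a noise vector, then upsample with
# strided transposed convolutions + batch norm + ReLU, tanh output.
# Sizes (z_dim=100, 64x64x3 output) are the paper's defaults, assumed.
def build_generator(z_dim=100):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(4 * 4 * 512, input_shape=(z_dim,)),
        tf.keras.layers.Reshape((4, 4, 512)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(256, 5, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(128, 5, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(3, 5, strides=2, padding="same",
                                        activation="tanh"),  # pixels in [-1, 1]
    ])

samples = build_generator()(tf.random.normal([16, 100]))  # (16, 64, 64, 3)
```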

- Face Inpainting - Tried the Context Encoder on the Labeled Faces in the Wild dataset, and these are the results.
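For reference, a minimal sketch of the context-encoder training signal: zero out a region at the input and penalize reconstruction error inside it. The 64x64 image size and centered 32x32 hole are assumptions, and the original paper adds an adversarial loss on top of this term:

```python
import tensorflow as tf

# Train a network to fill in a masked-out region from its context.
# Hole size/position here are illustrative, not the paper's exact recipe.
def center_mask(size=64, hole=32):
    lo = (size - hole) // 2
    hole_region = tf.pad(tf.ones([hole, hole, 1]),
                         [[lo, size - hole - lo], [lo, size - hole - lo], [0, 0]])
    return 1.0 - hole_region  # 1 = context kept, 0 = hole

def inpainting_loss(model, images):  # images: (batch, 64, 64, 3)
    mask = center_mask()
    recon = model(images * mask)  # the hole is zeroed out at the input
    # penalize reconstruction error only inside the missing region
    return tf.reduce_mean(tf.square((1.0 - mask) * (recon - images)))
```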

- Deep dreams

- Visualizing the first filter of VGG layer conv5_3.
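Visualizations like this come from gradient ascent on the input image. A minimal sketch using the stock Keras VGG16 (whose block5_conv3 corresponds to conv5_3) rather than this repo's own VGG loader; the step size and iteration count are assumptions:

```python
import tensorflow as tf

# Gradient ascent on the input: start from noise and repeatedly nudge
# the image to maximize one channel's mean activation.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
feat = tf.keras.Model(vgg.input, vgg.get_layer("block5_conv3").output)

img = tf.Variable(tf.random.uniform([1, 224, 224, 3], -0.5, 0.5))
for _ in range(100):  # iteration count is an assumption
    with tf.GradientTape() as tape:
        activation = tf.reduce_mean(feat(img)[..., 0])  # first filter
    g = tape.gradient(activation, img)
    img.assign_add(0.1 * g / (tf.norm(g) + 1e-8))  # normalized ascent step
```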

- Image Inversion - An implementation based on Mahendran and Vedaldi's paper. Note that the optimization objective didn't include a total-variation loss across the image, hence the visible noisy patterns in the result.
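A total-variation term is the standard fix for that noise: it penalizes differences between neighboring pixels. A minimal sketch (TensorFlow also ships this as tf.image.total_variation; the 1e-4 weight is an assumed hyperparameter):

```python
import tensorflow as tf

# Total-variation regularizer: discourages high-frequency noise by
# penalizing squared differences between neighboring pixels.
def tv_loss(img):  # img: (batch, H, W, C)
    dh = img[:, 1:, :, :] - img[:, :-1, :, :]  # vertical differences
    dw = img[:, :, 1:, :] - img[:, :, :-1, :]  # horizontal differences
    return tf.reduce_sum(tf.square(dh)) + tf.reduce_sum(tf.square(dw))

# inversion objective sketch: feature match + assumed TV weight, e.g.
# loss = tf.reduce_mean(tf.square(feat(x) - feat_target)) + 1e-4 * tv_loss(x)
```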

- NeuralArtist - Not exactly the best the network could do, but impatience got the better of me. The idea is to map a location to an RGB value and optimize the model to generate an image. If you squint a bit, you will see the image better :)
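A minimal sketch of the idea, with assumed layer sizes: overfit a small MLP that maps normalized pixel coordinates to RGB, so the network weights become a lossy encoding of one image:

```python
import tensorflow as tf

# "NeuralArtist": regress RGB values from (y, x) coordinates.
# Depth and width are assumptions; karpathy's convnet.js is the reference.
def coordinate_grid(h, w):
    ys, xs = tf.meshgrid(tf.linspace(-1.0, 1.0, h),
                         tf.linspace(-1.0, 1.0, w), indexing="ij")
    return tf.reshape(tf.stack([ys, xs], -1), [-1, 2])  # (h*w, 2)

mlp = tf.keras.Sequential(
    [tf.keras.layers.Dense(128, activation="relu") for _ in range(4)]
    + [tf.keras.layers.Dense(3, activation="sigmoid")])  # RGB in [0, 1]

def train_step(coords, target_pixels, opt):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(mlp(coords) - target_pixels))
    opt.apply_gradients(zip(tape.gradient(loss, mlp.trainable_variables),
                            mlp.trainable_variables))
    return loss
```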

- An attempt at an MNIST autoencoder (3 bottleneck variables) - An idea borrowed from karpathy's convnet.js. As noted on the convnet.js page, running the encoder longer does reduce the error and increase the separation. Here's a sample of the difference from the start to 20k iterations. Different colors correspond to labels 0-9.
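A minimal sketch of such an autoencoder, with assumed hidden widths; the 3-unit bottleneck is the point, since the codes can be scatter-plotted directly and colored by digit label:

```python
import tensorflow as tf

# Autoencoder with a 3-unit bottleneck: 784 -> 3 -> 784.
# Hidden widths are illustrative.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(3)])  # the 3 bottleneck variables
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(784, activation="sigmoid")])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x, x, epochs=5, batch_size=256)
codes = encoder(x[:1000])  # 3-D points to scatter-plot, colored by y[:1000]
```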

- Image Analogy - It was interesting to see how the model tries to learn. The model trained with just the image loss seems to optimize shape first, followed by color and scale, though the process is painfully slow; rotation optimization so far isn't visible on the horizon. The left image is the result from the most-trained model and the right is an intermediate result. (Will be getting back to this problem later...)

- Compositional Pattern Producing Networks - Some things are best left random and unexplained. A fun little project with the simplest of code.
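A minimal CPPN sketch: feed each pixel's (x, y, r) coordinates through a random, untrained MLP. The depth, width, and activations here are assumptions; varying the seed changes the pattern:

```python
import tensorflow as tf

# Compositional Pattern Producing Network: tanh layers over pixel
# coordinates give smooth, symmetric patterns. No training involved.
def cppn_image(h=256, w=256, layers=6, width=32, seed=0):
    init = tf.keras.initializers.RandomNormal(stddev=1.0, seed=seed)
    net = tf.keras.Sequential(
        [tf.keras.layers.Dense(width, activation="tanh",
                               kernel_initializer=init) for _ in range(layers)]
        + [tf.keras.layers.Dense(3, activation="sigmoid",
                                 kernel_initializer=init)])
    ys, xs = tf.meshgrid(tf.linspace(-1.0, 1.0, h),
                         tf.linspace(-1.0, 1.0, w), indexing="ij")
    r = tf.sqrt(xs**2 + ys**2)  # radius input adds circular structure
    coords = tf.reshape(tf.stack([xs, ys, r], -1), [-1, 3])
    return tf.reshape(net(coords), [h, w, 3])
```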

---

Check out my neuralnetworks.thought-experiments for more in-depth notes on deep learning models and concepts.