One-Shot-Learning-on-Omniglot-data

One shot learning using siamese twins architecture on Omniglot dataset

Table of Contents

  1. File Descriptions
     • Dataset
     • File Structure
  2. Technologies Used
  3. Structure
  4. Executive Summary
     • [Why use One-Shot Learning](#why-use-one-shot-learning)

File Descriptions


  • Dataset : The Omniglot dataset is a collection of 1623 hand-drawn characters from 50 alphabets. For every character there are only 20 examples, each drawn by a different person at a resolution of 105x105 pixels. Each image is paired with stroke data, a sequence of [x, y, t] coordinates with time (t) in milliseconds. The data is split into 30 alphabets used for training and the remaining 20 used for validation of the model. A minimal sketch of how this layout can be read in is shown after the file structure below.

Raw Dataset

  • File Structure :

    image_background
    └── Gujarati
        └── Character01
            └── 20 images
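
As an illustration, here is a minimal sketch of how images can be loaded from this folder layout into memory. It assumes the root folder is named `image_background` as shown above and that the drawings are readable by OpenCV; it is not necessarily the exact loading code used in this repository.

```python
import os
import cv2
import numpy as np

def load_alphabets(root_dir="image_background"):
    """Walk root/alphabet/characterXX/ and collect the 20 drawings per character.

    Returns a dict mapping (alphabet, character) -> array of shape (20, 105, 105),
    with pixel values scaled to [0, 1].
    """
    data = {}
    for alphabet in sorted(os.listdir(root_dir)):
        alphabet_path = os.path.join(root_dir, alphabet)
        if not os.path.isdir(alphabet_path):
            continue
        for character in sorted(os.listdir(alphabet_path)):
            character_path = os.path.join(alphabet_path, character)
            images = []
            for filename in sorted(os.listdir(character_path)):
                img = cv2.imread(os.path.join(character_path, filename),
                                 cv2.IMREAD_GRAYSCALE)
                images.append(img.astype("float32") / 255.0)
            data[(alphabet, character)] = np.stack(images)
    return data
```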

Technologies Used:

  • Python
  • Pandas
  • NumPy
  • Matplotlib
  • OpenCV
  • Scikit-Learn
  • Keras
  • TensorFlow

Executive Summary

Why use One-Shot Learning

In conventional image processing, an image is put through a CNN (Convolutional Neural Network) to extract features from which an object or edge is detected or classified. This is computationally intensive and requires a very large training set to reduce bias. Another issue is that whenever new training data is added, the model has to be re-trained. One-shot learning instead extracts features from a small set of training data and makes a faster prediction based on similarity. For example, face unlock on phones stores only a few images of your face (and the features extracted from them) on the device; when a new face is added, the entire model does not need to be re-trained. Instead, features are extracted from each image and compared for similarity, which keeps unlocking fast while still being reliable. A minimal sketch of a siamese network built on this idea follows below.
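To make the similarity idea concrete, below is a minimal sketch of a siamese (twin) network in Keras, loosely following the convolutional architecture commonly used for Omniglot verification. The layer sizes and the L1-distance scoring head are assumptions for illustration and may differ from the exact model in this repository.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_embedding_net(input_shape=(105, 105, 1)):
    """Shared convolutional encoder that maps a character image to a feature vector."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, (10, 10), activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, (7, 7), activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, (4, 4), activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(256, (4, 4), activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation="sigmoid")(x)
    return Model(inputs, x)

def build_siamese(input_shape=(105, 105, 1)):
    """Twin inputs share one encoder; similarity is scored from the |f(a) - f(b)| distance."""
    encoder = build_embedding_net(input_shape)
    img_a = layers.Input(shape=input_shape)
    img_b = layers.Input(shape=input_shape)
    feat_a, feat_b = encoder(img_a), encoder(img_b)
    l1_distance = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([feat_a, feat_b])
    score = layers.Dense(1, activation="sigmoid")(l1_distance)  # 1 = same character
    model = Model([img_a, img_b], score)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

During training, the network sees pairs of images labelled 1 (same character) or 0 (different character). At test time, a new character can be classified by comparing it against a single example of each candidate class and picking the pair with the highest similarity score.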
