Mobile technology is growing incredibly fast, yet there has been little technological development aimed at deaf and mute people. We aim to build a reliable platform that channels communication between a hearing person and a deaf/mute person, and vice versa.
To provide a communication channel for specially-abled people, specifically the deaf, mute, and blind, using "gesture tracking and recognition" with the help of computer vision and deep learning.
In this project, we use Indo-Pakistani Sign Language (IPSL) to train and test our primitive model. It is only able to recognize hand gestures.
We investigate different machine learning techniques like:
- K-Nearest Neighbours (KNN)
- Support Vector Machines (SVM)
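As a rough illustration, these two classifiers can be compared with scikit-learn along the following lines. This is a minimal sketch, assuming the gesture images have already been flattened into fixed-length feature vectors; `features.npy` and `labels.npy` are placeholder file names, not files from this repo.

```python
# Illustrative KNN-vs-SVM comparison on pre-extracted feature vectors;
# a sketch, not the repo's actual training code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assumption: X holds one flattened image per row, y the IPSL labels.
X = np.load("features.npy")  # hypothetical file name
y = np.load("labels.npy")    # hypothetical file name

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for name, model in [("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("svm", SVC(kernel="linear"))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")
```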
Before running this project, make sure you have the following dependencies -
- Dataset (Download the images from this link)
- Python 3.6
- pip
- OpenCV
Now, using the `pip install` command, install the following dependencies -
- numpy
- pandas
- scikit-learn
- scipy
- opencv-python
- tensorflow
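For example, all of them can be installed in one command (note that on pip, Sklearn is published as `scikit-learn` and OpenCV as `opencv-python`):

```
pip install numpy pandas scikit-learn scipy opencv-python tensorflow
```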
To run the project, perform the following steps -

- Put all the training and testing images in a directory and update their paths in the config file `common/config.py` (a hypothetical sketch of such a config follows this list).
- Generate the image-vs-label mapping for all the training images - `generate_images_labels.py train` (see the illustration after this list).
- Apply the image-transformation algorithms to the training images - `transform_images.py` (one plausible transformation chain is sketched after this list).
- Train the model (KNN & SVM) - `train_model.py <model-name>`. Note that the repo already includes pre-trained models for some algorithms, serialized at `data/generated/output/<model-name>/model-serialized-<model-name>.pkl`.
- Generate the image-vs-label mapping for all the test images - `generate_images_labels.py test`.
- Test the model - `predict_from_file.py <model-name>` (a sketch of the deserialization step appears after this list).
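The exact keys inside `common/config.py` are not shown in this README; the following is only a hypothetical sketch of the kind of path configuration the first step refers to. All names except the output directory (quoted above) are illustrative.

```python
# Hypothetical sketch of common/config.py; the real key names in the
# repo may differ. Only data/generated/output is taken from this README.
TRAIN_IMAGES_DIR = "/path/to/train-images"  # directory holding training images
TEST_IMAGES_DIR = "/path/to/test-images"    # directory holding testing images
OUTPUT_DIR = "data/generated/output"        # where serialized models are written
```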
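How `generate_images_labels.py` pairs images with labels is not spelled out above. One common layout, assumed here purely for illustration, keeps one sub-directory per gesture label and writes an `image-path,label` CSV:

```python
# Illustrative image-vs-label mapping: assumes (hypothetically) that
# images live in one sub-directory per gesture label.
import csv
import os

def generate_images_labels(images_dir: str, out_csv: str) -> None:
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for label in sorted(os.listdir(images_dir)):
            label_dir = os.path.join(images_dir, label)
            if not os.path.isdir(label_dir):
                continue
            for image_name in sorted(os.listdir(label_dir)):
                writer.writerow([os.path.join(label_dir, image_name), label])

generate_images_labels("/path/to/train-images", "train_images_labels.csv")
```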
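The specific algorithms inside `transform_images.py` are likewise not described here. The sketch below shows one plausible OpenCV chain (grayscale, blur, Otsu threshold, resize) that yields the kind of fixed-size binary silhouettes KNN/SVM can consume:

```python
# One plausible preprocessing chain for gesture images; the repo's
# transform_images.py may use different algorithms.
import cv2

def transform_image(path: str, size: int = 50):
    image = cv2.imread(path)                        # BGR image from disk
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # drop colour
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # hand silhouette
    return cv2.resize(mask, (size, size))           # fixed-size classifier input
```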
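Finally, `predict_from_file.py <model-name>` presumably deserializes the trained model from the `.pkl` path quoted above. A minimal sketch, assuming standard pickle serialization, with `svm` as an illustrative value for `<model-name>`:

```python
# Minimal sketch of loading a serialized model and classifying one
# image; assumes standard pickle serialization.
import pickle
import cv2

model_name = "svm"  # illustrative value for <model-name>
path = f"data/generated/output/{model_name}/model-serialized-{model_name}.pkl"
with open(path, "rb") as f:
    model = pickle.load(f)

# Assumption: the model expects the same flattened representation it
# was trained on; a 50x50 grayscale image is used here for illustration.
image = cv2.imread("/path/to/test-images/sample.jpg", cv2.IMREAD_GRAYSCALE)
features = cv2.resize(image, (50, 50)).reshape(1, -1)
print(model.predict(features))  # predicted IPSL gesture label
```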