As the second-largest provider of carbohydrates in Africa, cassava is a key food-security crop grown by smallholder farmers because it can withstand harsh conditions. At least 80% of smallholder farming households in Sub-Saharan Africa grow cassava, and viral diseases are a major source of poor yields.
In this competition, we introduce a dataset of 5 fine-grained cassava leaf disease categories with 9,436 labeled images collected during a regular survey in Uganda, mostly crowdsourced from farmers taking images of their gardens, and annotated by experts at the National Crops Resources Research Institute (NaCRRI) in collaboration with the AI Lab at Makerere University, Kampala.
The dataset consists of 9,436 annotated images and 12,595 unlabeled images of cassava leaves; participants can choose to use the unlabeled images as additional training data. The goal is to learn a model that classifies a given image into one of the 4 disease categories or a 5th category indicating a healthy leaf. This competition is part of the fine-grained visual categorization workshop (FGVC6) at CVPR 2019.
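For orientation, here is a minimal sketch of how the labeled portion of the dataset could be loaded for 5-class training with Keras. The `train/` path and the one-folder-per-class layout are assumptions for illustration, not the dataset's actual structure:

```python
import tensorflow as tf

# Assumed layout (illustrative only): train/<class_name>/*.jpg, with one
# folder per disease category plus one for healthy leaves (5 classes total).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train",
    label_mode="int",        # integer label per image
    image_size=(224, 224),   # resize leaf photos to a common size
    batch_size=32,
    validation_split=0.2,
    subset="training",
    seed=42,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "train",
    label_mode="int",
    image_size=(224, 224),
    batch_size=32,
    validation_split=0.2,
    subset="validation",
    seed=42,
)
print(train_ds.class_names)  # expect 5 entries: 4 diseases + healthy
```

How the 12,595 unlabeled images are used (e.g. pseudo-labeling or other semi-supervised approaches) is left to each participant.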
Acknowledgements We thank the experts and collaborators from NaCRRI for their assistance in preparing this dataset.
Citation Please cite this paper if you use the dataset for your project: https://arxiv.org/pdf/1908.02900.pdf
- Clone the repo (or download and extract the zip file)
git clone https://github.com/FreckledMe/cassava.git
- Open a terminal in the repository folder and install virtualenv
pip install virtualenv
- Create environment for project
python<version> -m venv <virtual-environment-name>
- Activate environment
environment_path\<virtual-environment-name>\Scripts\activate
- Install required libraries
(virtual-environment-name) pip install -r requirements.txt
- Run cassava_notebook.ipynb
Use Streamlit
If you don't want to train the model, you can use my pre-trained model:
(virtual-environment-name) streamlit run stream.py
Example result
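As a rough illustration of what a Streamlit inference app like stream.py typically contains, here is a hypothetical sketch: it loads a saved Keras model, accepts an uploaded leaf photo, and prints the predicted class. The file name model.h5, the 224x224 input size, and the class names are placeholders, not necessarily what this repository uses:

```python
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

# Placeholder class list (4 cassava diseases + healthy); the repo's labels may differ.
CLASS_NAMES = ["CBB", "CBSD", "CGM", "CMD", "Healthy"]

@st.cache_resource
def load_model():
    # "model.h5" is a hypothetical path to the pre-trained weights.
    return tf.keras.models.load_model("model.h5")

model = load_model()
uploaded = st.file_uploader("Upload a cassava leaf image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    img = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(img, caption="Input image")
    batch = np.expand_dims(np.array(img, dtype=np.float32) / 255.0, axis=0)
    probs = model.predict(batch)[0]
    st.write(f"Prediction: {CLASS_NAMES[int(np.argmax(probs))]} ({probs.max():.2%})")
```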
View loss and accuracy per epoch via TensorBoard
(virtual-environment-name) tensorboard --logdir logs
Example result
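The logs directory is produced during training. Below is a minimal sketch, assuming the notebook trains a Keras model and reusing the train_ds/val_ds placeholders from the loading sketch above; the tiny model here is illustrative, not the notebook's actual architecture:

```python
import tensorflow as tf

# Illustrative model only: 5 output classes (4 diseases + healthy).
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The TensorBoard callback writes per-epoch loss/accuracy under logs/,
# which `tensorboard --logdir logs` then visualizes.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs", histogram_freq=1)

model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=[tensorboard_cb])
```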