diff --git a/README.md b/README.md
index 7467660..51c6f2f 100644
--- a/README.md
+++ b/README.md
@@ -17,14 +17,30 @@ python
 ```
 
 ## Usage
-- Baseline Model: Baseline Model Code
-- Spectrogram Maker & Lucas Dataset Processing: Code #51404
-- CNN & KNN model: code #49213 (Jupyter notebook runs whole process & Python files contains the function codes)
+(The Jupyter notebooks run the whole process & the Python files contain the function code)
+- Spectrogram Maker & LUCAS Dataset Processing: /Make and clean dataset
+- Baseline Model: /Baseline Model Code
+- CNN & KNN model: /KNN and CNN model
+- Visualisation/front-end of the product for using the models: /Visualisation
 
 This code is to predict the organic corbon values based on the reflectence given of an area.
 To run the program, one can first use the trained model or train the model themselves.
 To train a model you use one of the notebooks in the code #49213 folder. Here you have different models one can use to train.
 
+## Make dataset
+This code is used to process and clean the dataset that is used to run the models and programs. The data comes from [https://esdac.jrc.ec.europa.eu/]. Be sure
+to change the locations of the folders referred to in the code to your own locations when running the programs; this may apply to other scripts as well. The dataset
+can of course also be of a different kind to your liking; be sure to adapt the code (mainly the scripts that filter/clean) to fit your dataset.
+
+Further documentation can be found in the README within the folder.
+
+## Baseline model
+After obtaining the data and making it ready to use, it is time for a baseline model; this does not need to be run when using the models yourself. These scripts
+are mainly there to establish a baseline.
+
+Further documentation can be found in the README within the folder.
+
+## Models
 - First part:
 First you have the CNN model, which has 2 versions. One version is to run on google colab and the other is for jupyter notebook.
 To run the one for google colab you must change the data source it obtains by first hooking up your drive. To do this run the first 2
@@ -47,6 +63,13 @@ is retrieved from your files needs to be personalized.
 The KNN model is a more accurate model, all the setup needed is to retrieve the data in the second cell, how this is done most likely will be
 different for your data depending on how the data is set up. The end result of the second cell is a pandas dataframe containing all your features and a target
 on the last column.
+Further documentation can be found in the README within the folder.
+
+## Visualisations
+In this folder, a series of scripts is used to visualise the models and provide an easy-to-use front end. The folder contains scripts that prepare SOC data and visualise it in an interactive dashboard.
+
+Further documentation can be found in the README within the folder.
+
 ## Library Install
 Describes how to install all the Libraries
 
@@ -77,8 +100,7 @@ pip install tensorflow
 ```
 
 ## Student:
-Quinn de Groot:
-[https://github.com/DragonKiller952](github)
+Quinn de Groot: [https://github.com/DragonKiller952](github)
 
 Ruben v Raaij: [https://github.com/GameModes](github)
 
@@ -86,7 +108,7 @@ Adam Chebil: [https://github.com/AdamMC-GL](github)
 
 Guy Veenhoff: [https://github.com/AI-Gio](github)
 
-Koen Heertum :
+Koen Heertum: [https://github.com/KoenvHeertum](github)
 
 ## Code Explaination
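The Usage text above says the Colab version of the CNN notebook needs your Google Drive hooked up before it can read the data. A minimal sketch of that step is shown below; `drive.mount` is the standard Colab call, while the folder and file names are illustrative placeholders rather than the repository's actual paths.

```python
# Minimal sketch of the Colab setup described in the Usage section: mount Google
# Drive so the notebook can read the dataset from your own Drive folder.
from google.colab import drive

drive.mount('/content/drive')  # prompts for authorization on first run

# Hypothetical location of the cleaned dataset on your Drive; replace with your own path.
data_path = '/content/drive/MyDrive/soc_project/cleaned_lucas.csv'
```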
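The KNN part of the Usage text expects the second cell to produce a pandas dataframe with all features followed by the target in the last column. A minimal sketch of that setup is shown below, using scikit-learn's KNN regressor; the CSV file name and `n_neighbors` value are assumptions for illustration, and the repository's own notebook may load the data differently.

```python
# Minimal sketch of the KNN data setup described above: a dataframe whose last
# column is the SOC target and whose other columns are the reflectance features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

df = pd.read_csv('cleaned_lucas.csv')  # hypothetical cleaned dataset
X = df.iloc[:, :-1]                    # all columns except the last: features
y = df.iloc[:, -1]                     # last column: organic carbon target

# Hold out part of the data to check the fit.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(X_train, y_train)
print('R^2 on held-out data:', knn.score(X_test, y_test))
```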