Update README.md #28

```
python <filename.py>
```

## Usage
- Spectrogram Maker & Lucas Dataset Processing: /Make and clean dataset
- Baseline Model: /Baseline Model Code
- CNN & KNN model: /KNN and CNN model
- Visualisation/front-end of the product for using the models: /Visualisation

This code predicts soil organic carbon (SOC) values based on the reflectance measured for an area.
To run the program, you can either use an already-trained model or train a model yourself.
To train a model, use one of the notebooks in the /KNN and CNN model folder; it contains several different models you can train.
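
If you only want to apply an already-trained model, a minimal sketch (with placeholder file names and shapes, not taken from this repository) could look like this:

```python
# Hypothetical example of using a trained Keras model for SOC prediction.
# "soc_cnn.h5" and "new_spectra.npy" are placeholder file names; adapt the
# paths and the input shape to the model you trained with the notebooks.
import numpy as np
from tensorflow import keras

model = keras.models.load_model("soc_cnn.h5")
spectra = np.load("new_spectra.npy")   # shape: (n_samples, n_bands)
predictions = model.predict(spectra)   # one predicted SOC value per sample
print(predictions[:5])
```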

## Make dataset
This code is used to process and clean the dataset that is used to run the model and programs. The data used comes from [https://esdac.jrc.ec.europa.eu/]. Be sure
to change the locations of the folders referred to in the code to your own locations when running the programs; this may apply to other scripts as well. The dataset
can of course also be of a different kind to your liking; be sure to adapt the code (mainly the parts that filter/clean) to fit your dataset.
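
As a rough illustration of the kind of cleaning these scripts perform (the file path and column names below are assumptions and will likely differ for your export):

```python
# Hypothetical cleaning sketch; paths and column names are placeholders.
import pandas as pd

DATA_DIR = "path/to/your/lucas/folder"   # change to your own location
df = pd.read_csv(f"{DATA_DIR}/lucas_soil.csv")

df = df.dropna(subset=["OC"])            # drop rows without an organic-carbon value
df = df[df["OC"] > 0]                    # keep plausible measurements only
df.to_csv(f"{DATA_DIR}/lucas_soil_clean.csv", index=False)
```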

Further documentation can be found in the README within the folder.

## Baseline model
After obtaining the data and making it ready to use, it is time for a baseline model. This step does not need to be run when you are just using the models yourself; these scripts
are mainly there to establish a baseline.
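
For illustration, a baseline can be as simple as always predicting the mean SOC of the training data, so the real models have something to beat. A sketch with placeholder data (not necessarily the exact approach used here):

```python
# Mean-predictor baseline on placeholder data; replace X and y with your
# reflectance features and SOC targets.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 10)   # placeholder reflectance features
y = np.random.rand(200)       # placeholder SOC targets

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, baseline.predict(X_test)))
print(f"Baseline RMSE: {rmse:.3f}")
```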

Further documentation can be found in the README within the folder.

## Models
- First part:
First you have the CNN model, which comes in two versions: one runs on Google Colab and the other in a Jupyter notebook.
To run the Colab version you must change the data source it obtains by first mounting your Drive. To do this, run the first 2
is retrieved from your files needs to be personalized.
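
Mounting your Drive in Colab follows the standard pattern below; the data folder name is a placeholder and needs to be changed to your own:

```python
# Standard Colab Drive mount; "your_dataset" is a placeholder folder name.
from google.colab import drive

drive.mount("/content/drive")                      # authorise access to your Drive
DATA_PATH = "/content/drive/MyDrive/your_dataset"  # change to where your data lives
```
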
The KNN model is a more accurate model. The only setup needed is to retrieve the data in the second cell; how this is done will most likely differ
depending on how your data is set up. The end result of the second cell is a pandas DataFrame containing all your features, with the target in the last column.
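
A sketch of what that second cell should end up with, and how a KNN regressor could then be fitted (the file name and parameters are illustrative assumptions):

```python
# Build a DataFrame with features in all but the last column and the target
# in the last column, then fit a KNN regressor. Paths are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

df = pd.read_csv("lucas_soil_clean.csv")   # placeholder path
X = df.iloc[:, :-1]                        # feature columns
y = df.iloc[:, -1]                         # target in the last column

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)
print(f"R^2 on held-out data: {knn.score(X_test, y_test):.3f}")
```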

Further documentation can be found in the README within the folder.

## Visualisations
This folder contains a series of scripts used to visualise the model and build an easy-to-use front end. The scripts prepare SOC data and visualise it in an interactive dashboard.
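
As a rough, static illustration of the kind of SOC view the dashboard provides (the real front end is interactive, and the column names below are assumptions):

```python
# Static sketch of an SOC map; column names ("longitude", "latitude", "OC")
# are placeholders for whatever your cleaned dataset uses.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("lucas_soil_clean.csv")   # placeholder path
sc = plt.scatter(df["longitude"], df["latitude"], c=df["OC"], cmap="viridis", s=5)
plt.colorbar(sc, label="Soil organic carbon")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("SOC sample locations")
plt.show()
```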

Further documentation can be found in the README within the folder.

## Library Install
Describes how to install all the libraries.

```
pip install tensorflow
```

## Students:
Quinn de Groot: [GitHub](https://github.com/DragonKiller952)

Ruben v Raaij: [GitHub](https://github.com/GameModes)

Adam Chebil: [GitHub](https://github.com/AdamMC-GL)

Guy Veenhoff: [GitHub](https://github.com/AI-Gio)

Koen Heertum: [GitHub](https://github.com/KoenvHeertum)


## Code Explanation