
Update README.md
BradenEverson authored Feb 12, 2024
1 parent db93d1a commit 6b4dcac
Showing 1 changed file with 20 additions and 1 deletion.
21 changes: 20 additions & 1 deletion README.md
@@ -24,9 +24,9 @@ or add the dependency directly in your **Cargo.toml** file
[dependencies]
unda = "{version}"
```

## Usage

### Dense Network
```rust
use unda::core::network::Network;
use unda::core::layer::{methods::activations::Activations, layers::{LayerTypes, InputTypes}};
@@ -56,6 +56,25 @@ fn main() {
}
```


## Examples
The Unda repository hosts a plethora of example ML models that tackle a range of common problems. These examples can be found in the ```/examples``` folder and can be run by entering:

```
cargo run --release --example {example_name}
```
where ```example_name``` is the name of the file or folder you wish to run, omitting the ```.rs``` extension.
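
For instance, to run the XOR example (assuming its file is named ```xor.rs```; the exact name in the repository may differ):

```
cargo run --release --example xor
```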

### Currently, Unda has example implementations for XOR, MNIST, and a [breast cancer model from Kaggle](https://www.kaggle.com/datasets/yasserh/breast-cancer-dataset)

**Important!** When running the MNIST example, please make sure to put the appropriate ubyte files into the /src/util/mnist directory of this repository. We are currently working on using reqwest to download the dataset automatically, but for now this must be done manually.

Here are Google Drive links to the necessary ubyte files:
- [labels](https://drive.google.com/file/d/191BR4awTN-XvIISPeB4_zaHJZ0EgC4-o/view?usp=drive_link)
- [images](https://drive.google.com/file/d/1vsltbfn7D3ZYFmAhN2fexomUaqr5oG6P/view?usp=drive_link)
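As a rough sketch of the expected layout (the file names shown here are assumptions based on standard MNIST naming and may differ from the files behind the drive links):

```
src/util/mnist/
├── train-images-idx3-ubyte   # images file from the drive link (name assumed)
└── train-labels-idx1-ubyte   # labels file from the drive link (name assumed)
```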

## Implications for the future of ML

Using the built-in **Input** trait, practically any data type can be mapped to an input for a neural network without cutting corners, and the inner trait for layers allows for a plug-and-play style of neural network development. Currently, Unda has full support for Dense layers, Adam optimization for backpropagation, activation functions (Sigmoid, TanH, ReLU, and LeakyReLU), and even loss analysis per model and per layer.
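
To illustrate the idea behind an **Input**-style trait, here is a self-contained sketch of the pattern: a trait that flattens an arbitrary data type into a `Vec<f32>` the network can consume. This is not Unda's actual trait definition; the trait and method names below are assumed purely for illustration.

```rust
// Sketch only: shows how an arbitrary data type can be mapped to a flat
// network input. Unda's real `Input` trait and its signatures may differ.
trait ToNetworkInput {
    fn to_input(&self) -> Vec<f32>;
}

// A 28x28 grayscale image flattened into 784 floats in the range [0, 1].
struct GrayImage {
    pixels: [[u8; 28]; 28],
}

impl ToNetworkInput for GrayImage {
    fn to_input(&self) -> Vec<f32> {
        self.pixels
            .iter()
            .flatten()
            .map(|&p| p as f32 / 255.0)
            .collect()
    }
}

fn main() {
    let img = GrayImage { pixels: [[0u8; 28]; 28] };
    assert_eq!(img.to_input().len(), 784);
}
```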

Gradient descent can currently run either synchronously as stochastic gradient descent or asynchronously through minibatch gradient descent.
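
As a rough, library-agnostic illustration of the batching difference (this is not Unda's API; it only shows how the two update schedules divide the data):

```rust
fn main() {
    let samples: Vec<f32> = (0..12).map(|i| i as f32 * 0.1).collect();

    // Stochastic gradient descent: one weight update per individual sample.
    for sample in &samples {
        // a weight update would happen here, using this single sample
        let _ = sample;
    }

    // Minibatch gradient descent: one weight update per chunk of samples,
    // with gradients averaged across the chunk.
    for batch in samples.chunks(4) {
        println!("one update over a batch of {} samples", batch.len());
    }
}
```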
