Merge pull request #4 from Mateusmsouza/dev
Dev
Mateusmsouza authored Jun 28, 2024
2 parents 6bd19d6 + 30c2cb0 commit 783b0de
Showing 14 changed files with 481 additions and 24 deletions.
8 changes: 8 additions & 0 deletions Makefile
@@ -0,0 +1,8 @@
test:
	pytest

coverage:
	coverage run -m pytest && coverage report --show-missing

lint:
	black liltorch
13 changes: 12 additions & 1 deletion README.md
@@ -1 +1,12 @@
Yes
# LilTorch

![Logo](documentation/docs/images/logo-torch.png)

LilTorch is a lightweight library for Deep Learning, written entirely in Python with NumPy as its sole dependency.
This library allows you to design and understand the internals of neural networks without the need for imported C/C++ code or binaries.
Everything is as understandable as Python itself.

[Documentation here](https://liltorchdocs.netlify.app/)

**Note**: This library is intended for educational purposes only. It is not recommended for production use, and execution speed is not a primary focus.

274 changes: 274 additions & 0 deletions documentation/docs/Documentation/API-Reference.md
@@ -1,7 +1,21 @@
<a id="__init__"></a>

# \_\_init\_\_

<a id="nn"></a>

# nn

<a id="nn.loss"></a>

# nn.loss

This module implements commonly used loss functions for neural networks.

Loss functions measure the difference between the model's predictions and the ground truth labels.
Minimizing the loss function during training helps the model learn accurate representations.
This module provides functions for popular loss functions like mean squared error, cross-entropy, etc.

<a id="nn.loss.MeanSquaredError"></a>

## MeanSquaredError Objects
@@ -57,3 +71,263 @@ Compute the gradient of the Mean Squared Error with respect to the predicted values.

- `np.ndarray` - The gradient of the loss with respect to y_pred.
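
For reference, the quantities documented above are compact enough to write out directly. The snippet below is a plain-NumPy illustration of the mean squared error and its gradient with respect to `y_pred`; it is a sketch of the math, not the library's implementation, and it assumes `y_true` and `y_pred` have the same shape:

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error: the average of the squared differences."""
    return float(np.mean((y_true - y_pred) ** 2))

def mse_grad(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """Gradient of the MSE with respect to y_pred: 2 * (y_pred - y_true) / N."""
    return 2 * (y_pred - y_true) / y_true.size

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
print(mse(y_true, y_pred))       # 0.03
print(mse_grad(y_true, y_pred))  # the error signal fed back through the network
```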

<a id="nn.layer"></a>

# nn.layer

<a id="nn.layer.Layer"></a>

## Layer Objects

```python
class Layer()
```

This is an abstract base class; concrete layers such as `FullyConnectedLayer` and `Tanh` inherit from it.

<a id="nn.network"></a>

# nn.network

<a id="nn.network.Network"></a>

## Network Objects

```python
class Network()
```

A basic neural network class for building and training multi-layer networks.

This class provides a framework for creating and using neural networks with customizable layers.
It supports adding different layer types (inherited from `liltorch.nn.layer.Layer`), performing
forward and backward passes for training, and updating layer weights using the provided learning rate.

<a id="nn.network.Network.__init__"></a>

#### \_\_init\_\_

```python
def __init__(lr: float) -> None
```

Initializes a new neural network.

**Arguments**:

- `lr` - The learning rate used for updating the weights of the layers during training. (float)

<a id="nn.network.Network.add"></a>

#### add

```python
def add(layer: Layer) -> None
```

Adds a layer to the neural network.

This method allows you to build your network by sequentially adding different layer types
(e.g., `Tanh`, `Linear`, etc.) inherited from the `Layer` class.

**Arguments**:

- `layer` - An instance of a layer class from `liltorch.nn.layer`.

<a id="nn.network.Network.forward"></a>

#### forward

```python
def forward(x: np.ndarray) -> np.ndarray
```

Performs the forward pass through the network.

This method propagates the input data (`x`) through all the layers in the network,
applying their respective forward passes sequentially.

**Arguments**:

- `x` - The input data for the network, typically a NumPy array.


**Returns**:

The output of the network after passing through all the layers. (NumPy array)

<a id="nn.network.Network.backward"></a>

#### backward

```python
def backward(error: np.ndarray)
```

Performs the backward pass for backpropagation.

This method calculates the gradients for all layers in the network using backpropagation.
It iterates through the layers in reverse order, starting from the output layer and
propagating the error signal back to the previous layers.

**Arguments**:

- `error` - The error signal from the loss function, typically a NumPy array.


**Returns**:

The updated error signal to be propagated further back in the network during training
(usually not used in the final output layer). (NumPy array)
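
Taken together, the methods above cover a full training step. The sketch below is a usage illustration rather than documented example code: the import paths are assumed to mirror the module headings in this reference under the `liltorch` package, and the loss gradient is written out by hand instead of going through `nn.loss`:

```python
import numpy as np

# Assumed import paths, following the module headings in this reference.
from liltorch.nn.network import Network
from liltorch.nn.fully_connected import FullyConnectedLayer
from liltorch.nn.activation import Tanh

net = Network(lr=0.1)
net.add(FullyConnectedLayer(2, 4))
net.add(Tanh())
net.add(FullyConnectedLayer(4, 1))

x = np.array([[0.0, 1.0]])         # input, shape (batch_size, input_size)
y = np.array([[1.0]])              # target

pred = net.forward(x)              # forward pass through every layer
error = 2 * (pred - y) / y.size    # hand-written MSE gradient
net.backward(error)                # backpropagate and update the layer weights
```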

<a id="nn.fully_connected"></a>

# nn.fully\_connected

<a id="nn.fully_connected.FullyConnectedLayer"></a>

## FullyConnectedLayer Objects

```python
class FullyConnectedLayer(Layer)
```

Fully-connected layer (dense layer) for neural networks.

This layer performs a linear transformation on the input data followed by a bias addition.
It's a fundamental building block for many neural network architectures.

During the forward pass, the input data is multiplied by the weight matrix and then added
to the bias vector. The resulting output is passed to the next layer in the network.

During the backward pass, the gradients are calculated for both the weights and biases
using backpropagation. These gradients are used to update the weights and biases
during training to improve the network's performance.

<a id="nn.fully_connected.FullyConnectedLayer.__init__"></a>

#### \_\_init\_\_

```python
def __init__(input_size, output_size)
```

Initializes a fully-connected layer.

**Arguments**:

- `input_size` - The number of neurons in the previous layer (the size of the input vector). (int)
- `output_size` - The number of neurons in this layer (the size of the output vector). (int)

<a id="nn.fully_connected.FullyConnectedLayer.forward"></a>

#### forward

```python
def forward(input_data)
```

Performs the forward pass through the layer.

This method multiplies the input data by the layer's weight matrix and adds the bias vector.

**Arguments**:

- `input_data` - The input data for the layer, a NumPy array of shape (batch_size, input_size).


**Returns**:

The output of the layer after applying the weights and bias, a NumPy array
of shape (batch_size, output_size).

<a id="nn.fully_connected.FullyConnectedLayer.backward"></a>

#### backward

```python
def backward(upstream_gradients, lr)
```

Performs the backward pass for backpropagation in this layer.

This method calculates the gradients for the weights, biases, and the error signal
to be propagated back to the previous layer.

**Arguments**:

- `upstream_gradients` - The gradient signal from the subsequent layer in the network
(a NumPy array of shape (batch_size, output_size)).
- `lr` - The learning rate used for updating the weights and biases during training. (float)


**Returns**:

The gradient signal to be propagated back to the previous layer in the network
(a NumPy array of shape (batch_size, input_size)).
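
For reference, the forward and backward computations described above follow the standard dense-layer equations. The NumPy sketch below is illustrative only (it is not the library's code) and uses the same (batch_size, input_size) / (batch_size, output_size) shape convention as this section:

```python
import numpy as np

batch_size, input_size, output_size = 4, 3, 2
x = np.random.randn(batch_size, input_size)
weights = np.random.randn(input_size, output_size)
bias = np.zeros((1, output_size))
lr = 0.1

# Forward pass: linear transformation plus bias.
output = x @ weights + bias                       # (batch_size, output_size)

# Backward pass, given the gradient arriving from the next layer.
upstream = np.random.randn(batch_size, output_size)
grad_weights = x.T @ upstream                     # (input_size, output_size)
grad_bias = upstream.sum(axis=0, keepdims=True)   # (1, output_size)
grad_input = upstream @ weights.T                 # (batch_size, input_size), returned to the previous layer

# Gradient-descent update using the learning rate.
weights -= lr * grad_weights
bias -= lr * grad_bias
```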

<a id="nn.activation"></a>

# nn.activation

This module implements commonly used activation functions for neural networks.

Activation functions introduce non-linearity into the network, allowing it to learn complex patterns
in the data. This module provides functions for popular activations like ReLU, sigmoid, tanh, etc.

<a id="nn.activation.Tanh"></a>

## Tanh Objects

```python
class Tanh(Layer)
```

Tanh activation layer for neural networks.

The Tanh (hyperbolic tangent) activation function introduces non-linearity into the network,
allowing it to learn complex patterns. It maps input values into the range (-1, 1).

This class implements the Tanh activation function for the forward and backward passes
used during neural network training.

<a id="nn.activation.Tanh.forward"></a>

#### forward

```python
def forward(input_data: np.ndarray) -> np.ndarray
```

Performs the forward pass using the Tanh activation function.

**Arguments**:

- `input_data` - A NumPy array representing the input data for this layer.


**Returns**:

A NumPy array containing the output of the Tanh activation function applied to the input data.

<a id="nn.activation.Tanh.backward"></a>

#### backward

```python
def backward(output_error: np.ndarray, learning_rate: float) -> np.ndarray
```

Calculates the gradients for the backward pass using the derivative of Tanh.

**Arguments**:

- `output_error` - The error signal propagated from the subsequent layer during backpropagation.
(A NumPy array)
- `learning_rate` - The learning rate used for updating the weights during training. (float)


**Returns**:

A NumPy array containing the error signal to be propagated back to the previous layer.
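
The math behind these two methods is short enough to spell out. The NumPy sketch below is illustrative rather than the library's implementation; it relies on the identity d/dx tanh(x) = 1 - tanh(x)^2 to scale the upstream error:

```python
import numpy as np

x = np.array([[-2.0, 0.0, 2.0]])

# Forward pass: squash the inputs into the range (-1, 1).
activated = np.tanh(x)                              # [[-0.964  0.     0.964]]

# Backward pass: multiply the upstream error by the local derivative.
output_error = np.array([[0.5, 0.5, 0.5]])
input_error = output_error * (1 - np.tanh(x) ** 2)
```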

16 changes: 16 additions & 0 deletions documentation/docs/Documentation/Examples.md
@@ -0,0 +1,16 @@
# Examples

In this section, you'll find Jupyter notebooks showcasing practical usage of LilTorch.
These notebooks demonstrate how to leverage the library's functionality for various deep learning tasks (more to be added).

## MNIST Digit Classification with 95% Accuracy

This notebook exemplifies the library's capabilities by achieving 95% accuracy on the classic MNIST handwritten digit classification dataset. The notebook guides you through the process of:

- **Loading the MNIST dataset**: how to load the MNIST dataset using appropriate libraries within your environment.
- **Building your neural network**: how to construct a network architecture suited to the MNIST task using the library's building blocks.
- **Training the network**: the training process, including defining the loss function and training parameters (a condensed sketch of such a loop follows the notebook link below).
- **Evaluating performance**: evaluation of the trained network on the test set, demonstrating the achieved 95% accuracy.

[Notebook on colab](https://colab.research.google.com/drive/1MLxcA6DC1xxrkY_Ya1zLv-1q7EP0S0Ws?usp=sharing)
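
If you want to reproduce the notebook's flow outside Colab, the condensed loop below sketches the same four steps. It is an assumption-heavy outline rather than the notebook's code: the data loading (here via scikit-learn's `fetch_openml`), the layer sizes, the hand-written MSE gradient, and the train/test split are all illustrative choices.

```python
import numpy as np
from sklearn.datasets import fetch_openml           # illustrative data source

from liltorch.nn.network import Network             # assumed import paths, as in the API reference
from liltorch.nn.fully_connected import FullyConnectedLayer
from liltorch.nn.activation import Tanh

# Load MNIST, scale pixels to [0, 1], and one-hot encode the labels.
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
x = mnist.data.astype(np.float64) / 255.0
labels = mnist.target.astype(int)
y = np.eye(10)[labels]

net = Network(lr=0.1)
net.add(FullyConnectedLayer(784, 64))
net.add(Tanh())
net.add(FullyConnectedLayer(64, 10))
net.add(Tanh())

# Train on the first 60,000 images with a hand-written MSE gradient.
for epoch in range(5):
    for i in range(0, 60000, 32):
        xb, yb = x[i:i + 32], y[i:i + 32]
        pred = net.forward(xb)
        net.backward(2 * (pred - yb) / yb.size)

# Evaluate on the remaining images.
pred_test = net.forward(x[60000:])
accuracy = (pred_test.argmax(axis=1) == labels[60000:]).mean()
print(f"test accuracy: {accuracy:.2%}")
```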

Binary file added documentation/docs/images/logo-torch.png
8 changes: 5 additions & 3 deletions documentation/docs/index.md
@@ -1,5 +1,7 @@
# LilTorch
<p align="center">
<img src="images/logo-torch.png" width="100" height="100" />
</p>
[LilTorch](https://github.com/Mateusmsouza/liltorch) is a lightweight library for Deep Learning, created entirely in Python with Numpy as its sole dependency. This library allows you to design and understand the internals of Neural Networks without the need for C/C++ imported code and binaries. Everything is as understandable as Python itself.

LilTorch is a lightweight library for Deep Learning, created entirely in Python with Numpy as its sole dependency. This library allows you to design and understand the internals of Neural Networks without the need for C/C++ imported code and binaries. Everything is as understandable as Python itself.

**Note**: This library is intended for educational purposes only. It is not recommended for production use, and execution speed is not a primary focus.
8 changes: 7 additions & 1 deletion documentation/mkdocs.yml
@@ -1 +1,7 @@
site_name: LilTorch
theme:
name: mkdocs
color_mode: auto
user_color_mode_toggle: true
analytics:
g-tag: G-2K6L423DZH
