Contract Address: 8i51XNNpGaKaj4G4nDdmQh95v4FKAxw8mhtaRoKd9tE8
A neural network–based digit recognition project using the MNIST dataset.

- Clone the Repository

  ```bash
  git clone git@github.com:tetsuo-ai/DigitSuo.git
  cd DigitSuo
  ```
- Install Dependencies

  Ubuntu/Debian:

  ```bash
  sudo apt-get install gcc libncurses-dev zlib1g-dev doxygen graphviz
  ```

  Fedora/RHEL:

  ```bash
  sudo dnf install gcc ncurses-devel zlib-devel doxygen graphviz
  ```
Use the provided Makefile for the following targets:
- Recognition Interface: `make`
  Compiles source files in `src/` into the executable `digit_recognition`.
- Training Program: `make train`
  Compiles `train.c` with OpenMP support into the executable `train`.
- Documentation: `make docs`
  Generates HTML documentation in `docs/html/` using Doxygen.
- Clean Build Artifacts: `make clean`
- Delete Debug Log: `make clean_debug`
- Delete Generated Documentation: `make clean_docs`
Run the training program:

```bash
./train
```

This will:

- Load and preprocess the MNIST dataset (files: `train-images-idx3-ubyte.gz` and `train-labels-idx1-ubyte.gz`).
- Apply data augmentation.
- Train the neural network.
- Save optimized weights to `src/weights.h` (export step sketched below).
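For orientation, here is a minimal sketch of how trained weights could be serialized into a C header like `src/weights.h`. The array names (`W1`, `W2`) and the exact layout are assumptions for illustration, not necessarily what `train.c` emits:

```c
#include <stdio.h>

/* Sizes matching the 784-256-10 architecture described below. */
#define INPUT_SIZE  784
#define HIDDEN_SIZE 256
#define OUTPUT_SIZE 10

/* Writes one weight matrix as a static C array so the recognition
 * binary can compile the weights in directly. */
static void write_matrix(FILE *f, const char *name,
                         const float *w, int rows, int cols)
{
    fprintf(f, "static const float %s[%d][%d] = {\n", name, rows, cols);
    for (int r = 0; r < rows; r++) {
        fprintf(f, "    {");
        for (int c = 0; c < cols; c++)
            fprintf(f, "%.8ff%s", w[r * cols + c], c + 1 < cols ? ", " : "");
        fprintf(f, "},\n");
    }
    fprintf(f, "};\n\n");
}

void save_weights(const float *w1, const float *w2)
{
    FILE *f = fopen("src/weights.h", "w");
    if (!f) { perror("weights.h"); return; }
    fprintf(f, "/* Generated by train -- do not edit by hand. */\n\n");
    write_matrix(f, "W1", w1, HIDDEN_SIZE, INPUT_SIZE);
    write_matrix(f, "W2", w2, OUTPUT_SIZE, HIDDEN_SIZE);
    fclose(f);
}
```

Compiling the weights into a header avoids any runtime file I/O in the recognition binary.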
Run the recognition interface (key handling is sketched after the controls list):

```bash
./digit_recognition
```
- Mouse/Arrow Keys: Draw digits
- Number Keys (0-9): Load example digits
- Enter: Submit for recognition
- C: Clear drawing
- Q: Quit application
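The bindings above map naturally onto an ncurses input loop. The sketch below is illustrative only; the commented-out handlers are placeholders, not the actual functions in `src/draw_interface.c`:

```c
#include <ncurses.h>

/* Illustrative dispatch loop; assumes initscr() has already run. */
void input_loop(void)
{
    int ch;
    keypad(stdscr, TRUE);          /* enable arrow-key codes */
    while ((ch = getch()) != 'q' && ch != 'Q') {
        if (ch >= '0' && ch <= '9') {
            /* load_example(ch - '0');     load a sample digit */
        } else switch (ch) {
        case KEY_UP: case KEY_DOWN:
        case KEY_LEFT: case KEY_RIGHT:
            /* move_cursor(ch);            draw at the new cell */
            break;
        case '\n': case KEY_ENTER:
            /* recognize_canvas();         submit for recognition */
            break;
        case 'c': case 'C':
            /* clear_canvas(); */
            break;
        }
    }
}
```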
```
Input Layer (784 neurons)
        │
        │  Dense connections with ReLU activation
        │  Weights initialized using He initialization
        ▼
Hidden Layer (256 neurons)
        │
        │  Dense connections with Softmax activation
        │  Xavier/Glorot initialization
        ▼
Output Layer (10 neurons)
```
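In code, the forward pass this diagram describes reduces to two dense layers. This sketch assumes the weight and bias arrays come from the generated `weights.h`; the names are chosen here for illustration:

```c
#include <math.h>

#define INPUT_SIZE  784
#define HIDDEN_SIZE 256
#define OUTPUT_SIZE 10

/* Hypothetical weight/bias arrays; in the real project these would be
 * the arrays baked into src/weights.h. */
extern const float W1[HIDDEN_SIZE][INPUT_SIZE], b1[HIDDEN_SIZE];
extern const float W2[OUTPUT_SIZE][HIDDEN_SIZE], b2[OUTPUT_SIZE];

void forward(const float in[INPUT_SIZE], float out[OUTPUT_SIZE])
{
    float hidden[HIDDEN_SIZE];

    /* Input -> hidden: dense layer with ReLU. */
    for (int j = 0; j < HIDDEN_SIZE; j++) {
        float z = b1[j];
        for (int i = 0; i < INPUT_SIZE; i++)
            z += W1[j][i] * in[i];
        hidden[j] = z > 0.0f ? z : 0.0f;
    }

    /* Hidden -> output: dense layer followed by softmax. */
    float max = -INFINITY, sum = 0.0f;
    for (int k = 0; k < OUTPUT_SIZE; k++) {
        float z = b2[k];
        for (int j = 0; j < HIDDEN_SIZE; j++)
            z += W2[k][j] * hidden[j];
        out[k] = z;
        if (z > max) max = z;
    }
    for (int k = 0; k < OUTPUT_SIZE; k++) {
        out[k] = expf(out[k] - max);   /* subtract max for stability */
        sum += out[k];
    }
    for (int k = 0; k < OUTPUT_SIZE; k++)
        out[k] /= sum;
}
```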
- Data Preprocessing
  - Load the MNIST dataset with zlib decompression (loading step sketched below).
  - Apply data augmentation:
    - Random rotation (±10°)
    - Random shifts (±5 pixels)
    - Gaussian blur (σ = 0.3)
  - Generate balanced mini-batches.
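As referenced above, a minimal sketch of the zlib loading step, assuming the standard MNIST IDX layout (big-endian header followed by raw pixel bytes); the function names are illustrative, not those in `train.c`:

```c
#include <stdint.h>
#include <stdlib.h>
#include <zlib.h>

/* Reads a 32-bit big-endian integer from an IDX header. */
static uint32_t read_be32(gzFile f)
{
    unsigned char b[4];
    gzread(f, b, 4);
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Loads MNIST images from a gzipped IDX file; caller frees the buffer.
 * Returns NULL on failure. */
unsigned char *load_images(const char *path, uint32_t *count)
{
    gzFile f = gzopen(path, "rb");
    if (!f) return NULL;

    uint32_t magic = read_be32(f);   /* 0x00000803 for image files */
    *count = read_be32(f);
    uint32_t rows = read_be32(f), cols = read_be32(f);
    if (magic != 0x803 || rows != 28 || cols != 28) { gzclose(f); return NULL; }

    size_t n = (size_t)*count * rows * cols;
    unsigned char *pixels = malloc(n);
    if (pixels && gzread(f, pixels, (unsigned)n) != (int)n) {
        free(pixels);
        pixels = NULL;
    }
    gzclose(f);
    return pixels;
}
```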
- Optimization
  - Mini-batch gradient descent with momentum (update rule sketched after this list).
  - Learning rate decay schedule.
  - Early stopping with patience.
  - OpenMP for parallel processing.
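The momentum update listed above is compact enough to show directly. The velocity buffer and hyper-parameter names here are illustrative, and the `omp parallel for` mirrors the project's use of OpenMP (build with `-fopenmp`):

```c
/* Classic momentum update: accumulate a velocity per weight, then apply it.
 * lr is the learning rate, mu the momentum coefficient. */
void sgd_momentum_step(float *w, float *v, const float *grad,
                       int n, float lr, float mu)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        v[i] = mu * v[i] - lr * grad[i];  /* accumulate velocity */
        w[i] += v[i];                     /* apply update */
    }
}
```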
- Accuracy: >98% on the MNIST test set.
- Inference latency: <10 ms per prediction.
- Memory footprint: ~2 MB during inference.
- Training time: ~5 minutes on a modern CPU.
Generate the documentation using:

```bash
make docs
```

Start the documentation server:

```bash
python3 serve_docs.py
```

Access the documentation at http://localhost:8000.
```
.
├── Doxyfile                     # Doxygen configuration file
├── doxygen-custom.css           # Custom CSS for documentation
├── Makefile                     # Build configuration
├── README.md                    # Project documentation
├── docs/                        # Documentation directory (generated)
├── serve_docs.py                # Script to serve documentation
├── train-images-idx3-ubyte.gz   # MNIST training images (compressed)
├── train-labels-idx1-ubyte.gz   # MNIST training labels (compressed)
├── train.c                      # Training program source
└── src/                         # Source code for recognition interface
    ├── main.c
    ├── draw_interface.c
    ├── neural_net.c
    ├── neural_net.h
    ├── draw_interface.h
    ├── utils.c
    ├── utils.h
    └── weights.h
```
Adjust neural network parameters in `train.c`:

```c
#define HIDDEN_SIZE 256   // Number of hidden neurons
#define BATCH_SIZE 64     // Training batch size
#define NUM_EPOCHS 10     // Maximum training epochs
#define BASE_LR 0.1f      // Initial learning rate
```
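Note that `BASE_LR` is only the starting point of the decay schedule mentioned under Optimization. One plausible, purely illustrative per-epoch schedule, assuming the `#define`s above are in scope:

```c
#include <math.h>

/* Hypothetical exponential decay from BASE_LR; the 0.95 factor is an
 * assumption, not a value taken from train.c. */
float learning_rate(int epoch)
{
    return BASE_LR * powf(0.95f, (float)epoch);
}
```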