Last edited: 2024-04-23
This repo contains my collection of materials and random notes that I take while researching and playing with Machine Learning (ML) and artificial Neural Networks (NN). It covers topics such as numerical methods, differential calculus, neural networks, libraries, implementations, materials I ended up using during my research, and other topics I found interesting. It is a work in progress and subject to constant change.
Some materials are plain files, while others are organized into subdirectories (unordered list):
- sparsedynamics - mirror of sparsedynamics.zip from Brunton et al. (2016), used e.g. in SINDy [1], [2].
- nektar - the Nektar++ spectral/hp element framework running on Colab using udocker.
- pinn - notes and materials regarding Physics-Informed Neural Networks (PINN).
- parf - Random Forest (RF) algorithm in Fortran.
- rforest - RF algorithm in Python.
- burgers - notes related to the convection-diffusion equation used in several examples in this repo.
- deepxde - material directly related to the DeepXDE library.
- weka - contains some links about Weka.
- horovod - (see below).
- loss - visualizing the loss landscape of a neural network.
Horovod was created internally at Uber to make it easy to take a single-GPU training script and scale it to train across many GPUs in parallel (a minimal sketch of this pattern appears after the notebook list below).
The horovod directory contains some Notebooks with examples:
- install-horov-tf1-sd.ipynb - Installing Horovod and TensorFlow v1 on the SDumont supercomputer.
- hv-tf1-mnist.ipynb - MNIST with TensorFlow v1 using MPI through Horovod, running on SDumont.
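For reference, here is a minimal sketch of the single-GPU-to-multi-GPU Horovod pattern used in the notebooks above, written against the TensorFlow 1.x API. The toy one-layer model and random batches are placeholders of mine, not the notebooks' actual code.

```python
# Minimal sketch of the Horovod pattern with TensorFlow 1.x.
# The model and the random data are illustrative placeholders.
import numpy as np
import tensorflow as tf          # TensorFlow 1.x API
import horovod.tensorflow as hvd

hvd.init()                                                       # one process per GPU

config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())   # pin each process to its own GPU

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(x, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged across workers via MPI allreduce.
opt = tf.train.AdamOptimizer(1e-3 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
train_op = opt.minimize(loss)

# Broadcast the initial weights from rank 0 so all workers start identical.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    for _ in range(100):
        xb = np.random.rand(32, 784).astype(np.float32)          # stand-in for an MNIST batch
        yb = np.random.randint(0, 10, size=32)
        sess.run(train_op, feed_dict={x: xb, y: yb})
```

A script like this is typically launched with MPI, e.g. `mpirun -np 4 python train.py`, with one process per GPU.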
In my MSc repo, dedicated to my master's thesis, I trained a convolutional NN:
- The other directory contains a PyTorch example of convolutional NN training using the MNIST database, running on the Santos Dumont supercomputer, adapted from IDRIS (a minimal training-loop sketch follows this item).
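A minimal sketch of what such a single-GPU PyTorch CNN training loop on MNIST looks like; the network, hyperparameters, and data path are illustrative assumptions, not the values used in the MSc repo.

```python
# Minimal sketch of CNN training on MNIST in PyTorch (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)   # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)  # applied after 2x2 pooling
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)    # 28 -> 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)    # 14 -> 7
        return self.fc(torch.flatten(x, 1))

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = SmallCNN().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model.train()
for epoch in range(2):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```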
In my CAP-351 course notes I made these Notebooks (minimal sketches of several of these techniques follow the list):
- project1-mlp.ipynb - a Multilayer Perceptron (MLP) is a fully connected feed-forward artificial neural network.
- project2-som.ipynb - a Self-Organizing Map (SOM), or self-organizing feature map, is an unsupervised machine learning technique that produces a low-dimensional representation of a higher-dimensional data set while preserving the topological structure of the data.
- project3-vae.ipynb - a Variational Autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling, belonging to the families of probabilistic graphical models and variational Bayesian methods.
- project4-cnn.ipynb - a Convolutional Neural Network (CNN, or ConvNet) is a class of artificial neural network most commonly applied to analyze visual imagery (a CNN training sketch appears in the MSc section above).
- project5-rnn.ipynb - a Recurrent Neural Network (RNN) is a class of artificial neural networks where connections between nodes can form cycles, allowing the output of some nodes to affect subsequent input to the same nodes.
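A minimal sketch of the MLP idea from project1 (the layer widths and the 784-input / 10-class setup are assumptions of mine, not the notebook's actual model):

```python
# Minimal fully connected feed-forward network (MLP) in PyTorch.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer (class logits)
)

x = torch.randn(32, 784)          # a batch of 32 random "images"
logits = mlp(x)
print(logits.shape)               # torch.Size([32, 10])
```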
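A minimal NumPy sketch of the SOM training idea from project2 (grid size, learning rate, and neighbourhood schedule are assumptions): each sample pulls the best-matching unit and its grid neighbours toward it, which is what preserves the topology of the data.

```python
# Minimal self-organizing map trained on toy data (illustrative sizes and rates).
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3            # 10x10 map of 3-dimensional weight vectors
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

data = rng.random((500, dim))              # toy data set (e.g. RGB colours)
n_steps, lr0, sigma0 = 2000, 0.5, 3.0

for t in range(n_steps):
    x = data[rng.integers(len(data))]
    # best-matching unit: node whose weight vector is closest to x
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # learning rate and neighbourhood radius decay over time
    frac = t / n_steps
    lr = lr0 * (1 - frac)
    sigma = sigma0 * (1 - frac) + 1e-3
    # Gaussian neighbourhood on the 2-D grid around the BMU
    grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)
```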
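A minimal PyTorch sketch of the VAE idea from project3 (layer sizes are assumptions): the encoder outputs a mean and log-variance, a latent code is sampled with the reparameterization trick, and the loss adds a KL term to the reconstruction error.

```python
# Minimal variational autoencoder in PyTorch (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, in_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    bce = F.binary_cross_entropy(recon, x, reduction="sum")        # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence term
    return bce + kld

model = VAE()
x = torch.rand(16, 784)                 # toy batch of inputs in [0, 1]
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```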
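A minimal PyTorch sketch of the recurrence described in project5 (sizes and the 2-class sequence-classification setup are assumptions): the same cell is applied at every time step and its hidden state carries information forward.

```python
# Minimal recurrent layer plus a classification head in PyTorch (illustrative sizes).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 2)                       # e.g. a 2-class sequence classifier

x = torch.randn(4, 10, 8)                     # batch of 4 sequences, 10 steps, 8 features
outputs, h_n = rnn(x)                         # outputs: (4, 10, 16); h_n: final hidden state
logits = head(h_n.squeeze(0))                 # classify each sequence from its final state
print(logits.shape)                           # torch.Size([4, 2])
```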
This repo is permanently under construction, so its content changes constantly.