
Tensor-Reloaded/Advanced-Topics-in-Neural-Networks-Template-2024


Repository for the Advanced Topics in Neural Networks laboratory, "Alexandru Ioan Cuza" University, Faculty of Computer Science, Master's degree.

Environment setup

Google Colab: PyTorch, Pandas, NumPy, TensorBoard, and Matplotlib are already available. wandb can be installed with pip install wandb.
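As a minimal sketch, a Colab cell for installing and logging into wandb could look like the following (the project and run names here are placeholders, not part of the course material):

# Install wandb inside the Colab runtime (the leading "!" runs a shell command)
!pip install -q wandb

import wandb

# Prompts for your API key from https://wandb.ai/authorize the first time it is called
wandb.login()

# "atnn-demo" is a hypothetical project name; replace it with your own
run = wandb.init(project="atnn-demo", name="setup-check")
run.log({"demo_metric": 0.0})
run.finish()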

Local installation:

  1. Create a Python environment (using conda or venv). We recommend installing conda from Miniforge.
# Create the environment
conda create -n 312 -c conda-forge python=3.12
# Activate the environment
conda activate 312
# Run this to use conda-forge as your highest priority channel (not needed if you installed conda from Miniforge)
conda config --add channels conda-forge
  2. Install PyTorch 2.4.1+ from pytorch.org using conda or pip, depending on your environment.
    • Choose the Stable release, your OS, Conda or Pip, and your compute platform. For Linux and Windows, CUDA and CPU builds are available; for Mac, only CPU and MPS-accelerated builds are available.
    • CPU example: conda install pytorch torchvision torchaudio cpuonly -c pytorch
  3. Install TensorBoard and W&B.
    • conda install -c conda-forge tensorboard wandb
  4. Install Matplotlib.
    • conda install conda-forge::matplotlib
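To sanity-check the local setup, a short Python snippet along the following lines can be run in the activated environment (an illustrative check only, not part of the labs; the device logic simply picks CUDA, MPS, or CPU depending on what the installed build supports):

# Importing these confirms PyTorch, torchvision, TensorBoard, wandb, and Matplotlib are installed
import torch
import torchvision
import matplotlib
import wandb
from torch.utils.tensorboard import SummaryWriter

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
print("MPS available:", torch.backends.mps.is_available())

# Run a tiny computation on the best available device to confirm the build works
device = "cuda" if torch.cuda.is_available() else ("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.rand(2, 3, device=device)
print("Sum of a random tensor on", device, ":", x.sum().item())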

Recommended resources:

Table of contents

  • Lab01: Tensor Operations (Homework 1: Multi-Layer Perceptron + Backpropagation)
  • Lab02: Convolutions, DataLoaders, Datasets, Data Augmentation techniques (Homework 2: Kaggle competition on CIFAR-100 with VGG-16)
  • Lab03: ResNets (Homework 3: Implement a complete training pipeline with PyTorch)
  • Lab04: Training pipeline implementation
  • Lab05: R-CNN, Fast R-CNN, Faster R-CNN, YOLO, U-Net, 3D U-Net, Ensemble methods, Model Soup
  • Lab07: Self-Supervised Learning, Autoencoders, VAE, GAN, Diffusion
  • Lab09: Sequence to sequence models, RNN, LSTM, Attention Is All You Need