🏠 Working from home
Highlights
- Pro
Pinned
- CUDA_Flash_Attention2 (Public)
  Implement Flash Attention v2 just from the paper, in Numba JIT and CUDA.
  CUDA · 2 stars
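The core idea behind Flash Attention v2, as described in the paper the pinned repo implements, is the online (streaming) softmax: attention is computed over key/value blocks with running max and denominator, so the full score matrix is never materialized. Below is a minimal pure-Python sketch of that trick for a single query row; it is an illustration of the algorithm under my own naming, not the repo's Numba/CUDA code.

```python
import math

def online_softmax_attention(q, K, V, block=2):
    # Streaming ("online") softmax over key/value blocks -- the trick
    # FlashAttention tiles across GPU thread blocks. Single query row,
    # pure Python; hypothetical names, not the repo's kernel.
    d_v = len(V[0])
    m = -math.inf            # running max of attention scores
    l = 0.0                  # running softmax denominator
    acc = [0.0] * d_v        # running weighted sum of value rows
    for i in range(0, len(K), block):
        for k_row, v_row in zip(K[i:i + block], V[i:i + block]):
            s = sum(qj * kj for qj, kj in zip(q, k_row))  # dot(q, k)
            m_new = max(m, s)
            scale = math.exp(m - m_new)  # rescale previous partial sums
            p = math.exp(s - m_new)      # unnormalized probability
            l = l * scale + p
            acc = [a * scale + p * v for a, v in zip(acc, v_row)]
            m = m_new
    return [a / l for a in acc]
```

Because the running accumulators are rescaled whenever the max changes, the result matches ordinary softmax attention exactly; the GPU version gains its speed by doing this per tile in shared memory instead of per element.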
- SmolLm2-zero (Public, forked from philschmid/deep-learning-pytorch-huggingface)
  Train a small LLM to "think" with no SFT, only RL.
  Jupyter Notebook · 2 stars
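The "-zero" naming and the "no SFT, only RL" description suggest an R1-Zero-style recipe, which typically uses a group-relative policy objective (GRPO): each sampled completion is scored against the mean and spread of its own sample group, so no learned value model is needed. Assuming that setup, here is a minimal sketch of the advantage computation; the function name and exact normalization are my own, not necessarily what the notebook does.

```python
import statistics

def group_relative_advantages(rewards):
    # GRPO-style advantage estimate: normalize each completion's reward
    # by the mean/std of its sample group (hypothetical helper, assuming
    # an R1-Zero-style "RL only" recipe).
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a constant group
    return [(r - mean) / std for r in rewards]
```

With rule-based rewards (e.g. 1.0 for a correct, well-formatted answer, else 0.0), these advantages weight the policy-gradient update, which is what lets the model learn to "think" without any supervised fine-tuning stage.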
-