wilburrito/llm-stuff

# llm-stuff

Fine-tuning a Mistral-7B model from Hugging Face with MLX, then deploying it as a Telegram bot.

Inspired by: https://www.youtube.com/watch?v=3PIqhdRzhxE

## Local Fine-tuning on Mac (QLoRA with MLX)

Code hacked from here: https://github.com/ml-explore/mlx-examples/tree/main/lora
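The upstream LoRA example trains on JSONL files (`train.jsonl` / `valid.jsonl`). A minimal sketch of producing data in that shape, assuming each line is a JSON object with a single `"text"` field as in the upstream example (the sample Q&A strings are made up; check the mlx-examples README for the exact format your dataset needs):

```python
import json

# Hypothetical training samples -- the assumed format is one JSON object
# per line with a "text" key, following the mlx-examples lora data layout.
samples = [
    {"text": "Q: What is MLX? A: An array framework for Apple silicon."},
    {"text": "Q: Which model is being tuned? A: Mistral 7B."},
]

with open("train.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Read the file back to confirm the format round-trips.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(loaded[0]["text"])
```

The same layout works for `valid.jsonl`; the lora script expects both files in the data directory you point it at.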

## How to Set Up

1. Clone the repo.
2. Create a Python env:

   ```shell
   python -m venv mlx-env
   ```

3. Activate the env (bash/zsh):

   ```shell
   source mlx-env/bin/activate
   ```

4. Install the requirements:

   ```shell
   pip install -r requirements.txt
   ```

Note: MLX has the following requirements (see the MLX repo for more details):

- An M-series chip (Apple silicon)
- A native Python >= 3.8
- macOS >= 13.5 (macOS 14 recommended)
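A quick way to sanity-check these requirements from Python before installing. This is a sketch: the thresholds come from the list above, and the `meets_mlx_requirements` helper is hypothetical, not part of MLX:

```python
import platform
import sys

def meets_mlx_requirements(machine: str,
                           py_version: tuple,
                           macos_version: tuple) -> bool:
    """Check the three MLX requirements listed in this README.

    Pure function so it can be exercised with any inputs; thresholds
    mirror the requirement list (Apple silicon, Python >= 3.8,
    macOS >= 13.5).
    """
    return (
        machine == "arm64"            # Apple silicon reports arm64
        and py_version >= (3, 8)      # native Python >= 3.8
        and macos_version >= (13, 5)  # macOS >= 13.5
    )

# On an actual Mac you could call it like this; platform.mac_ver()[0]
# returns e.g. "14.1" on macOS and an empty string elsewhere.
ver_str = platform.mac_ver()[0] or "0.0"
macos_version = tuple(int(p) for p in ver_str.split(".")[:2])
ok = meets_mlx_requirements(platform.machine(),
                            sys.version_info[:2],
                            macos_version)
print("MLX requirements met:", ok)
```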
