Making evaluating and fine-tuning LLaMA models with low-rank adaptation (LoRA) easy.
Update: On the `dev` branch, there's a new Chat UI and a new Demo Mode config as a simple and easy way to demonstrate new models. However, the new version does not have the fine-tuning feature yet and is not backward compatible, as it uses a new way to define how models are loaded and a new format of prompt templates (from LangChain). For more info, see: #28.
*Video: LLM Tuner Chat UI in Demo Mode*
See a demo on Hugging Face. *Only serves UI demonstration; to try training or text generation, run on Colab.*
- 1-click up and running in Google Colab with a standard GPU runtime.
- Loads and stores data in Google Drive.
- Evaluate various LLaMA LoRA models stored in your folder or from Hugging Face.
- Switch between base models such as `decapoda-research/llama-7b-hf`, `nomic-ai/gpt4all-j`, `databricks/dolly-v2-7b`, `EleutherAI/gpt-j-6b`, or `EleutherAI/pythia-6.9b`.
- Fine-tune LLaMA models with different prompt templates and training dataset formats.
- Load JSON and JSONL datasets from your folder, or even paste plain text directly into the UI.
- Supports Stanford Alpaca `seed_tasks`, `alpaca_data`, and OpenAI "prompt"-"completion" formats (see the sketch after this list).
- Use prompt templates to keep your dataset DRY.
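
For illustration, here is a minimal sketch of two of these dataset shapes, written to hypothetical files (the `./data/datasets/` path and the file names are assumptions for illustration, not requirements of the app):

```bash
mkdir -p ./data/datasets

# Alpaca-style JSON: a list of {"instruction", "input", "output"} records.
cat > ./data/datasets/alpaca_style.json <<'EOF'
[
  {
    "instruction": "Translate the sentence to French.",
    "input": "Good morning!",
    "output": "Bonjour !"
  }
]
EOF

# OpenAI "prompt"-"completion" JSONL: one JSON object per line.
cat > ./data/datasets/prompt_completion.jsonl <<'EOF'
{"prompt": "Q: What is the capital of France?\nA:", "completion": " Paris"}
{"prompt": "Q: What is 2 + 2?\nA:", "completion": " 4"}
EOF
```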
There are various ways to run this app:
- Run on Google Colab: The simplest way to get started; all you need is a Google account. The standard (free) GPU runtime is sufficient to run generation and training with a micro batch size of 8. However, text generation and training are much slower than on other cloud services, and Colab might terminate the execution due to inactivity while running long tasks.
- Run on a cloud service via SkyPilot: If you have a cloud service (Lambda Labs, GCP, AWS, or Azure) account, you can use SkyPilot to run the app on a cloud service. A cloud bucket can be mounted to preserve your data.
- Run locally: Depends on the hardware you have.
See video for step-by-step instructions.
Open this Colab Notebook and select Runtime > Run All (`⌘/Ctrl+F9`).
You will be prompted to authorize Google Drive access, as Google Drive will be used to store your data. See the "Config"/"Google Drive" section for settings and more info.
After approximately 5 minutes of running, you will see the public URL in the output of the "Launch"/"Start Gradio UI 🚀" section (like `Running on public URL: https://xxxx.gradio.live`). Open the URL in your browser to use the app.
After following the installation guide of SkyPilot, create a `.yaml` file to define a task for running the app:
```yaml
# llm-tuner.yaml

resources:
  accelerators: A10:1  # 1x NVIDIA A10 GPU, about US$0.6/hr on Lambda Cloud. Run `sky show-gpus` for supported GPU types, and `sky show-gpus [GPU_NAME]` for detailed information about a GPU type.
  cloud: lambda  # Optional; if left out, SkyPilot will automatically pick the cheapest cloud.

file_mounts:
  # Mount a persisted cloud storage that will be used as the data directory
  # (to store training datasets and trained models).
  # See https://skypilot.readthedocs.io/en/latest/reference/storage.html for details.
  /data:
    name: llm-tuner-data  # Make sure this name is unique or that you own this bucket. If it does not exist, SkyPilot will try to create a bucket with this name.
    store: s3  # Could be either of [s3, gcs]
    mode: MOUNT

# Clone the LLaMA-LoRA Tuner repo and install its dependencies.
setup: |
  conda create -q python=3.8 -n llm-tuner -y
  conda activate llm-tuner

  # Clone the LLaMA-LoRA Tuner repo and install its dependencies
  [ ! -d llm_tuner ] && git clone https://github.com/zetavg/LLaMA-LoRA-Tuner.git llm_tuner
  echo 'Installing dependencies...'
  pip install -r llm_tuner/requirements.lock.txt

  # Optional: install wandb to enable logging to Weights & Biases
  pip install wandb

  # Optional: patch bitsandbytes to work around the error "libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats"
  BITSANDBYTES_LOCATION="$(pip show bitsandbytes | grep 'Location' | awk '{print $2}')/bitsandbytes"
  [ -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so" ] && [ ! -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so.bak" ] && [ -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cuda121.so" ] && echo 'Patching bitsandbytes for GPU support...' && mv "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so" "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so.bak" && cp "$BITSANDBYTES_LOCATION/libbitsandbytes_cuda121.so" "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so"
  conda install -q cudatoolkit -y
  echo 'Dependencies installed.'

  # Optional: install and set up Cloudflare Tunnel to expose the app to the internet with a custom domain name
  [ -f /data/secrets/cloudflared_tunnel_token.txt ] && echo "Installing Cloudflare" && curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && sudo dpkg -i cloudflared.deb && sudo cloudflared service uninstall || : && sudo cloudflared service install "$(cat /data/secrets/cloudflared_tunnel_token.txt | tr -d '\n')"

  # Optional: pre-download models
  echo "Pre-downloading base models so that you won't have to wait for long once the app is ready..."
  python llm_tuner/download_base_model.py --base_model_names='decapoda-research/llama-7b-hf,nomic-ai/gpt4all-j'

# Start the app. `hf_access_token`, `wandb_api_key` and `wandb_project` are optional.
run: |
  conda activate llm-tuner
  python llm_tuner/app.py \
    --data_dir='/data' \
    --hf_access_token="$([ -f /data/secrets/hf_access_token.txt ] && cat /data/secrets/hf_access_token.txt | tr -d '\n')" \
    --wandb_api_key="$([ -f /data/secrets/wandb_api_key.txt ] && cat /data/secrets/wandb_api_key.txt | tr -d '\n')" \
    --wandb_project='llm-tuner' \
    --timezone='Atlantic/Reykjavik' \
    --base_model='decapoda-research/llama-7b-hf' \
    --base_model_choices='decapoda-research/llama-7b-hf,nomic-ai/gpt4all-j,databricks/dolly-v2-7b' \
    --share
```
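
The task above reads optional secrets from files under `/data/secrets/`. Since `/data` is backed by the `llm-tuner-data` bucket, one way to provide them is to upload the files to the bucket before launching; a sketch assuming the S3 store from the YAML and a configured AWS CLI:

```bash
# Upload secrets into the bucket that SkyPilot will mount at /data.
# The token values shown are placeholders; substitute your own.
echo -n 'hf_...your_token...' | aws s3 cp - s3://llm-tuner-data/secrets/hf_access_token.txt
echo -n 'your_wandb_api_key'  | aws s3 cp - s3://llm-tuner-data/secrets/wandb_api_key.txt
```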
Then launch a cluster to run the task:

```bash
sky launch -c llm-tuner llm-tuner.yaml
```
`-c ...` is an optional flag to specify a cluster name. If not specified, SkyPilot will automatically generate one.
You will see the public URL of the app in the terminal. Open the URL in your browser to use the app.
Note that exiting `sky launch` will only exit log streaming and will not stop the task. You can use `sky queue --skip-finished` to see the status of running or pending tasks, `sky logs <cluster_name> <job_id>` to connect back to log streaming, and `sky cancel <cluster_name> <job_id>` to stop a task.

When you are done, run `sky stop <cluster_name>` to stop the cluster. To terminate a cluster instead, run `sky down <cluster_name>`.
Remember to stop or shut down the cluster when you are done to avoid incurring unexpected charges. Run `sky cost-report` to see the cost of your clusters.
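
Putting the SkyPilot lifecycle together, a typical session might look like the following sketch (the cluster name and the job id `1` are placeholders):

```bash
sky launch -c llm-tuner llm-tuner.yaml  # launch the cluster and start the task
sky queue --skip-finished               # check running or pending tasks
sky logs llm-tuner 1                    # reattach to log streaming
sky cancel llm-tuner 1                  # stop the task
sky stop llm-tuner                      # stop the cluster when done
sky down llm-tuner                      # or terminate it entirely
sky cost-report                         # review what your clusters have cost
```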
Log into the cloud machine, or mount its filesystem on your local computer
To log into the cloud machine, run `ssh <cluster_name>`, such as `ssh llm-tuner`.
If you have `sshfs` installed on your local machine, you can mount the filesystem of the cloud machine on your local computer by running a command like the following:

```bash
mkdir -p /tmp/llm_tuner_server && umount /tmp/llm_tuner_server || : && sshfs llm-tuner:/ /tmp/llm_tuner_server
```
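
To detach the mount when you're done (the mount point matches the command above; which unmount command applies depends on your OS):

```bash
# macOS
umount /tmp/llm_tuner_server
# Linux (FUSE)
fusermount -u /tmp/llm_tuner_server
```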
Prepare the environment with conda:
```bash
conda create -y python=3.8 -n llm-tuner
conda activate llm-tuner
pip install -r requirements.lock.txt
```

Then start the app:

```bash
python app.py --data_dir='./data' --base_model='decapoda-research/llama-7b-hf' --timezone='Atlantic/Reykjavik' --share
```
You will see the local and public URLs of the app in the terminal. Open the URL in your browser to use the app.

For more options, see `python app.py --help`.
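
If you want Weights & Biases logging for local runs as well, the optional flags from the SkyPilot task above also apply here; a sketch, assuming your API key is in the `WANDB_API_KEY` environment variable:

```bash
pip install wandb  # optional dependency for logging

python app.py \
  --data_dir='./data' \
  --base_model='decapoda-research/llama-7b-hf' \
  --wandb_api_key="$WANDB_API_KEY" \
  --wandb_project='llm-tuner' \
  --share
```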
UI development mode
To test the UI without loading the language model, use the `--ui_dev_mode` flag:

```bash
python app.py --data_dir='./data' --base_model='decapoda-research/llama-7b-hf' --share --ui_dev_mode
```
To use Gradio Auto-Reloading, a `config.yaml` file is required since command line arguments are not supported. There's a sample file to start with: `cp config.yaml.sample config.yaml`. Then, just run `gradio app.py`.
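
A minimal sketch of that workflow; the keys shown are assumptions that mirror the CLI flags above, so verify them against `config.yaml.sample`:

```bash
cp config.yaml.sample config.yaml
# Edit config.yaml; assumed keys mirroring the CLI flags (verify against the sample file):
#   data_dir: ./data
#   base_model: decapoda-research/llama-7b-hf
#   ui_dev_mode: true
gradio app.py
```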
See video on YouTube.
TBC