👋🤗🤗👋 Join our WeChat.
中文 | English
This is the repo for the Efficient Finetuning of Quantized LLMs project, which aims to build and share instruction-following Chinese baichuan-7b/LLaMA/Pythia/GLM tuning methods that can be trained on a single Nvidia RTX-2080TI, as well as multi-round chatbots that can be trained on a single Nvidia RTX-3090 with a context length of 2048.
We use bitsandbytes for quantization, and the code is integrated with Hugging Face's PEFT and transformers libraries.
- [23/07/20] Now we support training the LLaMA-2 models in this repo. Try the `--model_name_or_path Llama-2-7b-hf` argument to use the LLaMA-2 model.
- [23/07/12] Now we support training the Baichuan-13B model in this repo. Try the `--model_name_or_path path_to_baichuan_model` and `--lora_target W_pack` arguments to train the Baichuan-13B model.
- [23/07/03] Now we support training the Falcon-7B/40B models in this repo. Try the `--model_name_or_path tiiuae/falcon-7b` and `--lora_target query_key_value` arguments to use the Falcon model.
- [23/06/25] We release the supervised finetuned baichuan-7B model (GaussianTech/baichuan-7b-sft) and the corresponding training script.
- [23/06/24] We release the supervised finetuned llama-7B model (GaussianTech/llama-7b-sft) and the corresponding training script.
- [23/06/15] Now we support training the baichuan-7B model in this repo. Try `--model_name_or_path baichuan-inc/baichuan-7B` to use the baichuan-7B model.
- [23/06/03] Now we support quantized training and inference (aka QLoRA). Try `scripts/qlora_finetune/finetune_llama_guanaco7b.sh` and set the `--bits 4/8` argument to work with quantized models.
- [23/05/25] Now we support LoRA training and inference. Try `scripts/lora_finetune/lora-finetune_alpaca.sh` to finetune the LLaMA model with LoRA on the Alpaca dataset.
- [23/05/20] Now we support full-parameter tuning and partial-parameter tuning. Try `scripts/full_finetune/full-finetune_alpaca.sh` to fully finetune the LLaMA model on the Alpaca dataset.
Currently supported models:
- LLaMA (7B/13B/33B/65B)
- LLaMA-2 (7B/13B/70B)
- BLOOM & BLOOMZ (560M/1.1B/1.7B/3B/7.1B/176B)
- baichuan-7B / baichuan-13B
- OPT (125M/350M/1.3B/2.7B/6.7B/66B)
Supported training approaches:
- (Continual) pre-training
- Supervised fine-tuning
As of now, we support the following datasets, most of which are available in the Hugging Face datasets library.
For supervised fine-tuning:
- Stanford Alpaca
- Stanford Alpaca (Chinese)
- Hello-SimpleAI/HC3
- BELLE 2M (zh)
- BELLE 1M (zh)
- BELLE 0.5M (zh)
- BELLE Dialogue 0.4M (zh)
- BELLE School Math 0.25M (zh)
- BELLE Multiturn Chat 0.8M (zh)
- databricks-dolly-15k
- mosaicml/dolly_hhrlhf
- GPT-4 Generated Data
- Alpaca CoT
- UltraChat
- OpenAssistant/oasst1
- ShareGPT_Vicuna_unfiltered
- BIAI/OL-CC
- timdettmers/openassistant-guanaco
- Evol-Instruct
For reward model training:
Please refer to data/README.md to learn how to use these datasets. If you want to explore more datasets, please refer to awesome-instruction-datasets. By default, we use the Stanford Alpaca dataset for training and evaluation.
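For example, the default Alpaca data can be pulled straight from the Hugging Face Hub with 🤗 Datasets. A minimal sketch (the dataset id `tatsu-lab/alpaca` is one common mirror and is an assumption here, not necessarily what this repo's data pipeline uses):

```python
from datasets import load_dataset

# One common Hub mirror of the Stanford Alpaca data; substitute your own copy if needed.
dataset = load_dataset('tatsu-lab/alpaca')
print(dataset['train'][0])  # {'instruction': ..., 'input': ..., 'output': ..., 'text': ...}
```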
Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands:
```bash
pip install --upgrade huggingface_hub
huggingface-cli login
```
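Alternatively, you can log in from Python using the `huggingface_hub` API:

```python
from huggingface_hub import login

# Prompts for (or accepts) a Hugging Face access token and caches it locally.
login()
```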
We provide a number of data preprocessing tools in the data folder. These tools are intended to be a starting point for further research and development.
- data_utils.py : Data preprocessing and formatting
- sft_dataset.py : Supervised fine-tuning dataset class and collator
- conv_dataset.py : Conversation dataset class and collator
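As a rough illustration of what the SFT preprocessing does, an Alpaca-style record is typically rendered into a single prompt string before tokenization. A minimal sketch, assuming the standard Alpaca template (the exact template used by `sft_dataset.py` may differ; this helper is hypothetical):

```python
def format_alpaca_example(example: dict) -> str:
    """Render an Alpaca-style record into one training prompt (hypothetical template)."""
    if example.get('input'):
        return (
            'Below is an instruction that describes a task, paired with an input. '
            'Write a response that appropriately completes the request.\n\n'
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        'Below is an instruction that describes a task. '
        'Write a response that appropriately completes the request.\n\n'
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
```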
We provide a number of models in the Hugging Face model hub. These models are trained with QLoRA and can be used for inference and finetuning. We provide the following models:
| Base Model | Adapter | Instruct Datasets | Train Script | Log | Model on Huggingface |
|---|---|---|---|---|---|
| llama-7b | FullFinetune | - | - | - | - |
| llama-7b | QLoRA | openassistant-guanaco | finetune_lamma7b | wandb log | GaussianTech/llama-7b-sft |
| llama-7b | QLoRA | OL-CC | finetune_lamma7b | - | - |
| baichuan7b | QLoRA | openassistant-guanaco | finetune_baichuan7b | wandb log | GaussianTech/baichuan-7b-sft |
| baichuan7b | QLoRA | OL-CC | finetune_baichuan7b | wandb log | - |
Requirements:
- CUDA >= 11.0
- Python 3.8+ and PyTorch 1.13.1+
- 🤗 Transformers, Datasets, Accelerate, PEFT and bitsandbytes
- jieba, rouge_chinese and nltk (used for evaluation)
- gradio (used in gradio_webserver.py)
To load models in 4-bit with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (>= 0.39.0). You can achieve the above with the following commands:
```bash
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
```
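A quick sanity check that the expected versions were picked up (optional; a minimal sketch):

```python
import accelerate
import bitsandbytes
import peft
import transformers

# bitsandbytes >= 0.39.0 is required for 4-bit (NF4) quantization support.
print('bitsandbytes:', bitsandbytes.__version__)
print('transformers:', transformers.__version__)
print('peft:', peft.__version__)
print('accelerate:', accelerate.__version__)
```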
Clone this repository and navigate to the Efficient-Tuning-LLMs folder:

```bash
git clone https://github.com/jianzhnie/Efficient-Tuning-LLMs.git
cd Efficient-Tuning-LLMs
```
| Main function | Usage | Scripts |
|---|---|---|
| train.py | Full finetune LLMs on SFT datasets | full_finetune |
| train_lora.py | Finetune LLMs using LoRA (Low-Rank Adaptation of Large Language Models) | lora_finetune |
| train_qlora.py | Finetune LLMs using QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs) | qlora_finetune |
The `train_qlora.py` code is a starting point for finetuning and inference on various datasets.
Basic command for finetuning a baseline model on the Alpaca dataset:

```bash
python train_qlora.py --model_name_or_path <path_or_name>
```
For models larger than 13B, we recommend adjusting the learning rate:

```bash
python train_qlora.py --learning_rate 0.0001 --model_name_or_path <path_or_name>
```
We can also tweak our hyperparameters:
```bash
python train_qlora.py \
    --model_name_or_path ~/checkpoints/baichuan7b \
    --dataset_cfg ./data/alpaca_zh_pcyn.yaml \
    --output_dir ./work_dir/oasst1-baichuan-7b \
    --num_train_epochs 4 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy steps \
    --eval_steps 50 \
    --save_strategy steps \
    --save_total_limit 5 \
    --save_steps 100 \
    --logging_strategy steps \
    --logging_steps 1 \
    --learning_rate 0.0002 \
    --warmup_ratio 0.03 \
    --weight_decay 0.0 \
    --lr_scheduler_type constant \
    --adam_beta2 0.999 \
    --max_grad_norm 0.3 \
    --max_new_tokens 32 \
    --source_max_len 512 \
    --target_max_len 512 \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --double_quant \
    --quant_type nf4 \
    --fp16 \
    --bits 4 \
    --gradient_checkpointing \
    --trust_remote_code \
    --do_train \
    --do_eval \
    --sample_generate \
    --data_seed 42 \
    --seed 0
```
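Under the hood, the QLoRA setup roughly amounts to loading the base model in 4-bit (see the snippet further below) and attaching a PEFT LoRA adapter that matches the `--lora_r`/`--lora_alpha`/`--lora_dropout` flags above. A minimal sketch using the standard PEFT API (the target module names are architecture-specific and are an assumption here, not what `train_qlora.py` necessarily hard-codes):

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# `model` is a 4-bit quantized base model, loaded as shown later in this README.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,              # matches --lora_r
    lora_alpha=16,     # matches --lora_alpha
    lora_dropout=0.1,  # matches --lora_dropout
    bias='none',
    task_type='CAUSAL_LM',
    # Target modules vary by architecture: e.g. W_pack for Baichuan,
    # query_key_value for Falcon; q_proj/v_proj is a common LLaMA default.
    target_modules=['q_proj', 'v_proj'],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```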
To find more scripts for finetuning and inference, please refer to the `scripts` folder.
Quantization parameters are controlled by the `BitsAndBytesConfig` (see the HF documentation) as follows:
- Loading in 4-bit is activated through `load_in_4bit`.
- The datatype used for the linear layer computations is set with `bnb_4bit_compute_dtype`.
- Nested quantization is activated through `bnb_4bit_use_double_quant`.
- The datatype used for quantization is specified with `bnb_4bit_quant_type`. Note that there are two supported quantization datatypes, `fp4` (four-bit float) and `nf4` (normal four-bit float). The latter is theoretically optimal for normally distributed weights, so we recommend using `nf4`.

For example:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Cap per-GPU memory so the weights are sharded across all visible GPUs.
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}

model = AutoModelForCausalLM.from_pretrained(
    '/name/or/path/to/your/model',
    load_in_4bit=True,
    device_map='auto',
    max_memory=max_memory,
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4',
    ),
)
```
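Once loaded, the 4-bit model behaves like any other transformers model at inference time. A minimal usage sketch (the model path and prompt are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('/name/or/path/to/your/model')

prompt = 'Below is an instruction that describes a task.\n### Instruction:\nSay hello.\n### Response:\n'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)

# Greedy decoding for brevity; tune the generation parameters as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```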
We provide two Google Colab notebooks to demonstrate the use of 4bit models in inference and fine-tuning. These notebooks are intended to be a starting point for further research and development.
- Basic usage Google Colab notebook - This notebook shows how to use 4bit models in inference with all their variants, and how to run GPT-neo-X (a 20B parameter model) on a free Google Colab instance 🤯
- Fine tuning Google Colab notebook - This notebook shows how to fine-tune a 4bit model on a downstream task using the Hugging Face ecosystem. We show that it is possible to fine tune GPT-neo-X 20B on a Google Colab instance!
Other examples are found under the examples/ folder.
- Finetune LLama-7B (ex1)
- Finetune GPT-neo-X 20B (ex2)
You can specify the path to your dataset using the `--dataset` argument. If the `--dataset_format` argument is not set, it will default to the Alpaca format (a sketch of both record layouts follows the examples below). Here are a few examples:
- Training with an Alpaca-format dataset:

```bash
python train_qlora.py --dataset="path/to/your/dataset"
```
- Training with a self-instruct format dataset:

```bash
python train_qlora.py --dataset="path/to/your/dataset" --dataset_format="self-instruct"
```
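For reference, a single record in each layout might look like the following (the field names for the self-instruct variant follow the common prompt/completion convention and are an assumption here):

```python
# Alpaca-style record (the default --dataset_format):
alpaca_example = {
    'instruction': 'Summarize the following paragraph.',
    'input': 'QLoRA finetunes quantized LLMs by training low-rank adapters...',
    'output': 'QLoRA enables efficient finetuning of quantized LLMs.',
}

# Self-instruct-style record (--dataset_format="self-instruct"):
self_instruct_example = {
    'prompt': 'Summarize: QLoRA finetunes quantized LLMs by training low-rank adapters...',
    'completion': 'QLoRA enables efficient finetuning of quantized LLMs.',
}
```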
Multi-GPU training and inference work out of the box with Hugging Face's Accelerate. Note that the `per_device_train_batch_size` and `per_device_eval_batch_size` arguments are global batch sizes, unlike what their names suggest.
When loading a model for training or inference on multiple GPUs, you should pass something like the following to `AutoModelForCausalLM.from_pretrained()`:

```python
device_map = "auto"
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}
```
Run the following script to chat with your chatbot interactively on the command line: type an instruction and press Enter to generate a reply, type `clear` to clear the conversation history, and type `stop` to exit the program.
```bash
# --model_name_or_path: base model; --checkpoint_dir: trained (LoRA) adapter weights
python cli_demo.py \
    --model_name_or_path ~/checkpoints/baichuan7b \
    --checkpoint_dir ./work_dir/checkpoint-700 \
    --trust_remote_code \
    --double_quant \
    --quant_type nf4 \
    --fp16 \
    --bits 4
```
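If you prefer to load the finetuned checkpoint in your own code instead of through `cli_demo.py`, the adapter can be attached with the standard PEFT API. A minimal sketch (paths are placeholders):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_path = '/path/to/baichuan7b'     # base model
adapter_path = './work_dir/checkpoint-700'  # trained LoRA adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    load_in_4bit=True,
    device_map='auto',
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
# Attach the LoRA adapter produced by training on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()
```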
The `gradio_webserver.py` script reads the foundation model from the Hugging Face model hub and the LoRA weights from `path/to/your/model_dir`, and runs a Gradio interface for inference on a specified input. Users should treat this as example code for using the model, and modify it as needed.
Example usage:
```bash
python gradio_webserver.py \
    --model_name_or_path decapoda-research/llama-7b-hf \
    --lora_model_name_or_path path/to/your/model_dir
```
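A stripped-down version of such a web server might look like the following sketch (the real `gradio_webserver.py` in this repo may differ; `model` and `tokenizer` are assumed to be loaded as in the PEFT sketch above):

```python
import gradio as gr

def generate_reply(instruction: str) -> str:
    # `model` and `tokenizer` are assumed to be already loaded (see the PEFT sketch above).
    inputs = tokenizer(instruction, return_tensors='pt').to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

demo = gr.Interface(fn=generate_reply, inputs='text', outputs='text', title='QLoRA Chatbot')
demo.launch(server_name='0.0.0.0', server_port=7860)
```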
We provide generations for the models described in the paper for both OA and Vicuna queries in the `eval/generations` folder. These are intended to foster further research on model evaluation and analysis.
Can you distinguish ChatGPT from Guanaco? Give it a try! You can access the model response Colab here comparing ChatGPT and Guanaco 65B on Vicuna prompts.
Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem.
- 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication kernels.
- Resuming a LoRA training run with the Trainer currently runs into an error.
- Currently, using `bnb_4bit_compute_dtype='fp16'` can lead to instabilities. For 7B LLaMA, only 80% of finetuning runs complete without error. We have solutions, but they are not yet integrated into bitsandbytes.
- Make sure that `tokenizer.bos_token_id = 1` to avoid generation issues.
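For the last point, a quick check and fix might look like this (a sketch; adjust the tokenizer path to your model):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('decapoda-research/llama-7b-hf')
# Some LLaMA tokenizer exports ship with a wrong BOS id; force it to 1.
if tokenizer.bos_token_id != 1:
    tokenizer.bos_token_id = 1
```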
Efficient Finetuning of Quantized LLMs is released under the Apache 2.0 license.
We thank the Hugging Face team, in particular Younes Belkada, for their support in integrating QLoRA with the PEFT and transformers libraries.
We appreciate the work by many open-source contributors.
Please cite the repo if you use the data or code in this repo.
```bibtex
@misc{Chinese-Guanaco,
  author = {jianzhnie},
  title = {Chinese-Guanaco: Efficient Finetuning of Quantized LLMs for Chinese},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jianzhnie/Efficient-Tuning-LLMs}},
}
```