This document provides step-by-step instructions for running large language models (LLMs) on the 4th Gen Intel® Xeon® Scalable Processor (codenamed Sapphire Rapids) with PyTorch and Intel® Extension for PyTorch.

The scripts `run_clm.py`, `run_mlm.py`, and `run_plm.py` each support three quantization approaches (dynamic, static, and quantization-aware training) based on Intel® Neural Compressor, and report last-token prediction accuracy via the trainer.
Note: quantization of large language models has been moved to the text-generation example.
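For intuition, all three approaches ultimately map float32 tensors to int8 using a scale and zero point. Below is a minimal, framework-free sketch of the affine quantization scheme that underlies static quantization; it is illustrative only, not the Intel® Neural Compressor implementation.

```python
def quantize_affine(values, num_bits=8):
    """Affine (asymmetric) quantization: x_q = round(x / scale) + zero_point.

    The scale and zero point are derived from the observed min/max range;
    static quantization estimates that range from calibration data.
    """
    qmin, qmax = 0, 2 ** num_bits - 1           # uint8 range [0, 255]
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0    # avoid div-by-zero for constant input
    zero_point = round(qmin - lo / scale)
    quantized = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return quantized, scale, zero_point


def dequantize_affine(quantized, scale, zero_point):
    """Recover approximate float values: x ≈ (x_q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in quantized]


q, scale, zp = quantize_affine([-1.0, 0.0, 0.5, 1.0])
approx = dequantize_affine(q, scale, zp)
# round-trip error is bounded by roughly scale / 2 per element
```

Dynamic quantization applies the same mapping but computes the activation range on the fly at inference time, while QAT simulates this rounding during training so the model can adapt to it.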
# Installation
```bash
git clone https://github.com/intel/intel-extension-for-transformers.git itrex
cd itrex
pip install -r requirements.txt
pip install -v .
cd examples/huggingface/pytorch/language-modeling/quantization
pip install -r requirements.txt
```
# Run

Here is how to run each script:
## Causal Language Modeling (CLM)

```bash
python run_clm.py \
    --model_name_or_path EleutherAI/gpt-neo-125M \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --tune \
    --quantization_approach static \
    --do_train \
    --do_eval \
    --output_dir ./tmp/clm_output \
    --overwrite_output_dir
```
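The accuracy these scripts report is last-token prediction accuracy. A framework-free sketch of that metric (the real computation happens inside the trainer's evaluation loop; the function name and list-based inputs here are illustrative assumptions):

```python
def last_token_accuracy(logits, labels):
    """Fraction of examples whose highest-scoring vocabulary entry at the
    final position matches the reference label.

    logits: per-example list of vocabulary scores for the last position
    labels: per-example reference token ids
    """
    correct = sum(
        1 for scores, label in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)


# two examples over a toy 4-token vocabulary
scores = [[0.1, 0.7, 0.1, 0.1],   # argmax = 1
          [0.3, 0.2, 0.4, 0.1]]   # argmax = 2
print(last_token_accuracy(scores, [1, 0]))  # → 0.5
```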
## Masked Language Modeling (MLM)

```bash
python run_mlm.py \
    --model_name_or_path bert-base-uncased \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --tune \
    --quantization_approach static \
    --do_train \
    --do_eval \
    --output_dir ./tmp/mlm_output \
    --overwrite_output_dir
```
## Permutation Language Modeling (PLM)

```bash
python run_plm.py \
    --model_name_or_path xlnet-base-cased \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --tune \
    --quantization_approach static \
    --do_train \
    --do_eval \
    --output_dir ./tmp/plm_output \
    --overwrite_output_dir
```
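For comparison, the dynamic approach (which needs no calibration step) can also be reproduced outside these scripts with PyTorch's built-in `quantize_dynamic` API. A minimal sketch on a toy model rather than an LLM; this is not what `--quantization_approach dynamic` runs internally (that path goes through Intel® Neural Compressor), only an illustration of the same idea:

```python
import torch

# a toy stand-in for a transformer's linear layers
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)

# dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 4])
```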