[OV] Update optimization docs page with information about VLMs #1007

Merged: 11 commits, Nov 26, 2024
120 changes: 83 additions & 37 deletions docs/source/openvino/export.mdx
@@ -28,70 +28,108 @@ optimum-cli export openvino --model local_llama --task text-generation-with-past
Check out the help for more options:

```bash
optimum-cli export openvino --help

usage: optimum-cli export openvino [-h] -m MODEL [--task TASK] [--framework {pt,tf}] [--trust-remote-code]
[--weight-format {fp32,fp16,int8,int4,mxfp4,nf4}]
[--library {transformers,diffusers,timm,sentence_transformers,open_clip}]
[--cache_dir CACHE_DIR] [--pad-token-id PAD_TOKEN_ID] [--ratio RATIO] [--sym]
[--group-size GROUP_SIZE] [--dataset DATASET] [--all-layers] [--awq]
[--scale-estimation] [--gptq] [--sensitivity-metric SENSITIVITY_METRIC]
[--num-samples NUM_SAMPLES] [--disable-stateful] [--disable-convert-tokenizer]
output

optional arguments:
-h, --help show this help message and exit

Required arguments:
-m MODEL, --model MODEL
Model ID on huggingface.co or path on disk to load model from.
output Path indicating the directory where to store the generated OV model.

Optional arguments:
--task TASK The task to export the model for. If not specified, the task will be auto-inferred based on
the model. Available tasks depend on the model, but are among: ['fill-mask', 'masked-im',
'audio-classification', 'automatic-speech-recognition', 'text-to-audio', 'image-text-to-text',
'depth-estimation', 'image-to-image', 'text-generation', 'text-to-image', 'mask-generation',
'audio-frame-classification', 'sentence-similarity', 'image-classification', 'multiple-
choice', 'text-classification', 'text2text-generation', 'token-classification', 'feature-
extraction', 'zero-shot-image-classification', 'zero-shot-object-detection', 'object-
detection', 'inpainting', 'question-answering', 'semantic-segmentation', 'image-segmentation',
'audio-xvector', 'image-to-text']. For decoder models, use `xxx-with-past` to export the model
using past key values in the decoder.
--framework {pt,tf} The framework to use for the export. If not provided, will attempt to use the local
checkpoint's original framework or what is available in the environment.
--trust-remote-code Allows to use custom code for the modeling hosted in the model repository. This option should
only be set for repositories you trust and in which you have read the code, as it will execute
on your local machine arbitrary code present in the model repository.
--weight-format {fp32,fp16,int8,int4,mxfp4,nf4}
The weight format of the exported model.
--library {transformers,diffusers,timm,sentence_transformers,open_clip}
The library used to load the model before export. If not provided, will attempt to infer the
local checkpoint's library
--cache_dir CACHE_DIR
The path to a directory in which the downloaded model should be cached if the standard cache
should not be used.
--pad-token-id PAD_TOKEN_ID
This is needed by some models, for some tasks. If not provided, will attempt to use the
tokenizer to guess it.
--ratio RATIO A parameter used when applying 4-bit quantization to control the ratio between 4-bit and 8-bit
quantization. If set to 0.8, 80% of the layers will be quantized to int4 while 20% will be
quantized to int8. This helps to achieve better accuracy at the sacrifice of the model size
and inference latency. Default value is 1.0.
--sym Whether to apply symmetric quantization
--group-size GROUP_SIZE
The group size to use for quantization. Recommended value is 128 and -1 uses per-column
quantization.
--dataset DATASET The dataset used for data-aware compression or quantization with NNCF. You can use the one
from the list ['wikitext2','c4','c4-new'] for language models or
['conceptual_captions','laion/220k-GPT4Vision-captions-from-LIVIS','laion/filtered-wit'] for
diffusion models.
--all-layers Whether embeddings and last MatMul layers should be compressed to INT4. If not provided an
weight compression is applied, they are compressed to INT8.
--awq Whether to apply AWQ algorithm. AWQ improves generation quality of INT4-compressed LLMs, but
requires additional time for tuning weights on a calibration dataset. To run AWQ, please also
provide a dataset argument. Note: it is possible that there will be no matching patterns in the
model to apply AWQ, in such case it will be skipped.
--scale-estimation Indicates whether to apply a scale estimation algorithm that minimizes the L2 error between
the original and compressed layers. Providing a dataset is required to run scale estimation.
Please note, that applying scale estimation takes additional memory and time.
--gptq Indicates whether to apply GPTQ algorithm that optimizes compressed weights in a layer-wise
fashion to minimize the difference between activations of a compressed and original layer.
Please note, that applying GPTQ takes additional memory and time.
--sensitivity-metric SENSITIVITY_METRIC
The sensitivity metric for assigning quantization precision to layers. It can be one of the
following: ['weight_quantization_error', 'hessian_input_activation',
'mean_activation_variance', 'max_activation_variance', 'mean_activation_magnitude'].
--num-samples NUM_SAMPLES
The maximum number of samples to take from the dataset for quantization.
--disable-stateful Disable stateful converted models, stateless models will be generated instead. Stateful models
are produced by default when this key is not used. In stateful models all kv-cache inputs and
outputs are hidden in the model and are not exposed as model inputs and outputs. If --disable-
stateful option is used, it may result in sub-optimal inference performance. Use it when you
intentionally want to use a stateless model, for example, to be compatible with existing
OpenVINO native inference code that expects KV-cache inputs and outputs in the model.
--disable-convert-tokenizer
Do not add converted tokenizer and detokenizer OpenVINO models.
```

You can also apply fp16, 8-bit, or 4-bit weight-only quantization on the Linear, Convolutional and Embedding layers when exporting your model by setting `--weight-format` to `fp16`, `int8`, or `int4`, respectively.

Export with INT8 weights compression:
```bash
optimum-cli export openvino --model meta-llama/Meta-Llama-3-8B --weight-format int8 ov_model/
```

Export with INT4 weights compression:
```bash
optimum-cli export openvino --model meta-llama/Meta-Llama-3-8B --weight-format int4 ov_model/
```

Export with INT4 weights compression and data-aware AWQ and Scale Estimation algorithms:
```bash
optimum-cli export openvino --model meta-llama/Meta-Llama-3-8B \
--weight-format int4 --awq --scale-estimation --dataset wikitext2 ov_model/
```

For more information on the quantization parameters, check out the [documentation](inference#weight-only-quantization).
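
The same compression options are also available through the Python API when loading a model. Below is a minimal sketch of the 4-bit AWQ + Scale Estimation export shown above; the output directory name is illustrative.

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# 4-bit weight-only compression with data-aware AWQ and Scale Estimation,
# mirroring the CLI example above
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    quant_method="awq",
    scale_estimation=True,
    dataset="wikitext2",
)
model = OVModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("ov_model")  # illustrative output path
```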


@@ -128,6 +166,14 @@ To export your Stable Diffusion XL model to the OpenVINO IR format with the CLI
optimum-cli export openvino --model stabilityai/stable-diffusion-xl-base-1.0 ov_sdxl/
```

You can also apply hybrid quantization during model export. For example:
```bash
optimum-cli export openvino --model stabilityai/stable-diffusion-xl-base-1.0 \
--weight-format int8 --dataset conceptual_captions ov_sdxl/
```

For more information about hybrid quantization, take a look at this Jupyter [notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/stable_diffusion_hybrid_quantization.ipynb).
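
Hybrid quantization can likewise be applied from the Python API. The sketch below assumes that passing a weight quantization config with a dataset to a diffusion pipeline triggers hybrid quantization, mirroring the CLI example above; the output directory is illustrative.

```python
from optimum.intel import OVStableDiffusionXLPipeline, OVWeightQuantizationConfig

# Providing a dataset together with 8-bit weight quantization enables
# hybrid quantization for diffusion pipelines
pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=8, dataset="conceptual_captions"),
)
pipeline.save_pretrained("ov_sdxl")  # illustrative output path
```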

## When loading your model

You can also load your PyTorch checkpoint and convert it to the OpenVINO format on the fly by setting `export=True` when loading your model.
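
A minimal sketch of this pattern (the model ID and output directory below are illustrative):

```python
from optimum.intel import OVModelForCausalLM

# Convert the PyTorch checkpoint to OpenVINO IR on the fly at load time
model = OVModelForCausalLM.from_pretrained("gpt2", export=True)

# Optionally save the converted model so the export only happens once
model.save_pretrained("ov_gpt2")
```
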
53 changes: 42 additions & 11 deletions docs/source/openvino/optimization.mdx
@@ -30,29 +30,37 @@ Quantization can be applied on the model's Linear, Convolutional and Embedding l

#### 8-bit

For 8-bit weight quantization, pass a `quantization_config` equal to `OVWeightQuantizationConfig(bits=8)` to load your model's weights in 8-bit:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model_id = "helenai/gpt2-ov"
quantization_config = OVWeightQuantizationConfig(bits=8)
model = OVModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)

# Saves the int8 model that will be x4 smaller than its fp32 counterpart
model.save_pretrained(saving_directory)
```

Weights of language models inside vision-language pipelines can be quantized in a similar way:
```python
model = OVModelForVisualCausalLM.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    quantization_config=quantization_config
)
```
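
The compressed model can then be saved and reloaded like any other exported model; the directory name below is illustrative.

```python
# Save the int8-compressed vision-language model to disk (illustrative path)
model.save_pretrained("llava_v1.6_int8_ov")

# Reload the already-quantized model later without re-running compression
model = OVModelForVisualCausalLM.from_pretrained("llava_v1.6_int8_ov")
```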

<Tip warning={true}>

If `quantization_config` is not provided, the model will be exported in 8-bit by default when it has more than 1 billion parameters. You can disable this with `load_in_8bit=False`.

</Tip>
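
For example, a minimal sketch of disabling this default compression at export time (the model ID is illustrative):

```python
from optimum.intel import OVModelForCausalLM

# Keep full-precision weights instead of the default 8-bit compression
# applied to models larger than 1B parameters
model = OVModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", export=True, load_in_8bit=False)
```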

#### 4-bit

4-bit weight quantization can be achieved in a similar way:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
@@ -61,10 +69,24 @@ quantization_config = OVWeightQuantizationConfig(bits=4)
model = OVModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
```

Or for vision-language pipelines:
```python
model = OVModelForVisualCausalLM.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    quantization_config=quantization_config
)
```

You can tune the quantization parameters to achieve a better performance-accuracy trade-off as follows:

```python
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=False,
    ratio=0.8,
    quant_method="awq",
    dataset="wikitext2"
)
```

By default, the quantization scheme is [asymmetric](https://github.com/openvinotoolkit/nncf/blob/develop/docs/usage/training_time_compression/other_algorithms/LegacyQuantization.md#asymmetric-quantization); to make it [symmetric](https://github.com/openvinotoolkit/nncf/blob/develop/docs/usage/training_time_compression/other_algorithms/LegacyQuantization.md#symmetric-quantization), add `sym=True`.
@@ -76,12 +98,21 @@ For 4-bit quantization you can also specify the following arguments in the quant
Smaller `group_size` and `ratio` values usually improve accuracy at the expense of model size and inference latency.
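
For instance, a configuration combining these options might look like the following (the specific values are illustrative):

```python
from optimum.intel import OVWeightQuantizationConfig

# Symmetric 4-bit compression with group-wise quantization;
# 90% of the layers in int4, the rest kept in int8
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=True,
    group_size=128,
    ratio=0.9,
)
```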

The quality of a 4-bit weight-compressed model can be further improved by employing one of the following data-dependent methods:
* **AWQ**, which stands for Activation-aware Weight Quantization, is an algorithm that tunes model weights for more accurate 4-bit compression. It slightly improves generation quality of compressed LLMs, but requires significant additional time and memory for tuning weights on a calibration dataset. Please note that it is possible that there will be no matching patterns in the model to apply AWQ, in which case it will be skipped.
* **Scale Estimation** is a method that tunes quantization scales to minimize the `L2` error between the original and compressed layers. Providing a dataset is required to run scale estimation. Using this method also incurs additional time and memory overhead.
* **GPTQ** optimizes compressed weights in a layer-wise fashion to minimize the difference between activations of a compressed and original layer.

Data-aware algorithms can be applied together or separately. For that, provide corresponding arguments to the 4-bit `OVWeightQuantizationConfig` together with a dataset. For example:
```python
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=False,
    ratio=0.8,
    quant_method="awq",
    scale_estimation=True,
    gptq=True,
    dataset="wikitext2"
)
```

### Static quantization