[MOE Quantization] Warn against "undercalibrated" modules #2262

Open · wants to merge 62 commits into main from feature/damian/moe

Conversation

@dbogunowicz (Contributor) commented May 2, 2024

Note: this branch requires neuralmagic/compressed-tensors#46 to land in compressed-tensors first.

Example Use:

from sparseml.transformers import SparseAutoModelForCausalLM, SparseAutoTokenizer, oneshot
import torch

model_name = "Isotonic/TinyMixtral-4x248M-MoE"

# Load the MoE model and its tokenizer
model = SparseAutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="cuda:0",
    torch_dtype=torch.float16,
)
tokenizer = SparseAutoTokenizer.from_pretrained(model_name)

dataset = "open-platypus"
recipe = "tests/sparseml/transformers/compression/recipes/new_quant_full.yaml"

# One-shot quantization; min_tokens_per_group is the fraction of calibration
# tokens a module must receive before it is considered sufficiently calibrated
oneshot(
    model=model,
    dataset=dataset,
    overwrite_output_dir=True,
    output_dir="./output_one_shot",
    recipe=recipe,
    num_calibration_samples=4,
    pad_to_max_length=False,
    min_tokens_per_group=0.3,
)
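
Running the above emits a warning for every expert module that receives fewer than min_tokens_per_group (here 30%) of the calibration tokens:
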
2024-05-15 12:15:13 sparseml.transformers.finetune.runner INFO     *** One Shot ***
2024-05-15 12:15:14 sparseml.core.recipe.recipe INFO     Loading recipe from file tests/sparseml/transformers/compression/recipes/new_quant_full.yaml
/root/sparseml/.venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_fuse_fn_name" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
/root/sparseml/.venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_fuse_fn_kwargs" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
2024-05-15 12:15:14 sparseml.modifiers.quantization_vllm.pytorch INFO     Running vLLMQuantizationModifier calibration with 4 samples...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  4.19it/s]
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.0.block_sparse_moe.experts.1.w1 received less than 30% of calibration batch tokens (212/970 tokens). This may harm the quantization quality.
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.0.block_sparse_moe.experts.1.w2 received less than 30% of calibration batch tokens (212/970 tokens). This may harm the quantization quality.
...
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.10.block_sparse_moe.experts.3.w3 received less than 30% of calibration batch tokens (233/970 tokens). This may harm the quantization quality.
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.11.block_sparse_moe.experts.2.w1 received less than 30% of calibration batch tokens (21/970 tokens). This may harm the quantization quality.
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.11.block_sparse_moe.experts.2.w2 received less than 30% of calibration batch tokens (21/970 tokens). This may harm the quantization quality.
2024-05-15 12:15:15 sparseml.modifiers.quantization_vllm.pytorch WARNING  The module_name: model.layers.11.block_sparse_moe.experts.2.w3 received less than 30% of calibration batch tokens (21/970 tokens). This may harm the quantization quality.
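
For illustration only (not the code added by this PR): a minimal, self-contained sketch, assuming plain PyTorch forward hooks, of how one could count the calibration tokens each expert linear layer actually sees and warn when the count falls below a threshold like min_tokens_per_group. The helper names track_token_coverage and report_undercalibrated are hypothetical.

# Illustrative sketch only; not the PR's implementation.
from collections import defaultdict

import torch
from torch import nn


def track_token_coverage(model: nn.Module, counts: dict):
    """Register hooks that count the tokens (rows) passing through each Linear."""
    handles = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            def hook(mod, inputs, output, name=name):
                # inputs[0] has shape (..., hidden_dim); every leading element is a token
                counts[name] += inputs[0].reshape(-1, inputs[0].shape[-1]).shape[0]
            handles.append(module.register_forward_hook(hook))
    return handles


def report_undercalibrated(counts: dict, total_tokens: int, min_fraction: float = 0.3):
    """Warn for modules that saw fewer than min_fraction of all calibration tokens."""
    for name, seen in counts.items():
        if seen < min_fraction * total_tokens:
            print(
                f"WARNING: {name} received only {seen}/{total_tokens} calibration "
                f"tokens (< {min_fraction:.0%}); quantization quality may suffer."
            )


if __name__ == "__main__":
    # Toy stand-in for MoE experts: the "router" sends every token to expert_a
    # but only a handful to expert_b, so expert_b gets flagged.
    experts = nn.ModuleDict({"expert_a": nn.Linear(8, 8), "expert_b": nn.Linear(8, 8)})
    counts = defaultdict(int)
    handles = track_token_coverage(experts, counts)

    tokens = torch.randn(64, 8)        # 64 calibration tokens in total
    experts["expert_a"](tokens)        # all 64 tokens reach expert_a
    experts["expert_b"](tokens[:10])   # only 10 tokens reach expert_b

    for h in handles:
        h.remove()
    report_undercalibrated(counts, total_tokens=64)  # flags expert_b (10/64 < 30%)

In the real setting the hooks would wrap the calibration loop run by the quantization modifier, and rarely-routed expert projections would accumulate low counts in exactly the way the warnings above show.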

@dbogunowicz changed the base branch from main to sa/quant_mod_refactor May 2, 2024 11:35
@dbogunowicz changed the title from [One-shot MOE model] Support for Qwen2-MOE to [WiP][One-shot MOE model] Support for Qwen2-MOE May 6, 2024
@dbogunowicz changed the base branch from sa/quant_mod_refactor to feature/damian/bump_transformers_440 May 6, 2024 11:49
@dbogunowicz changed the base branch from feature/damian/bump_transformers_440 to sa/quant_mod_refactor May 6, 2024 11:49
@dbogunowicz changed the title from [WiP][One-shot MOE model] Support for Qwen2-MOE to [WiP][MOE Quantization] Support for Qwen2-MOE May 6, 2024
@dbogunowicz force-pushed the feature/damian/moe branch from 31d86d5 to 139f388 May 6, 2024 11:55
@bfineran (Contributor) left a comment:

let's move this to an examples/integrations directory

Base automatically changed from sa/quant_mod_refactor to main May 6, 2024 20:02
@dbogunowicz changed the base branch from main to feature/damian/bump_transformers_440 May 7, 2024 10:28
@dbogunowicz changed the title from [WiP][MOE Quantization] Support for Qwen2-MOE to [MOE Quantization] Warn against "undercalibrated" modules May 7, 2024
@dbogunowicz changed the base branch from feature/damian/bump_transformers_440 to main May 10, 2024 12:35